Why I Believe in SOTA Models Over Custom Ones
I think the future is cheaper, open-source SOTA models combined with context, not custom, narrow models.

AI Quality Inversion
A troubling thought about what we will think about high-quality content in the future.

The Great Transition
There are a bunch of different transitions happening right now—all at the same time, all (I think) heading in the same direction. Here is a long-form exploration of the various pieces.

Starting 2026
A welcome back and early entry into 2026. Sponsored by: Knocknoc!

Judge AI based on Output, Not Mechanism
How we can use an output-based system to judge whether different kinds of technology achieve understanding or intelligence.

Humans Need Entropy
How humans and AI models share the same weakness: deterioration without novel inputs.

Why I Think Karpathy is Wrong on the AGI Timeline
Karpathy is confusing LLM limitations with AI-system limitations, and that makes all the difference.

Novelty Exploration vs. Pattern Exploitation
How moving from exploration to exploitation can help you as both a consumer and a creator.

Magnifying Time
Some thoughts on how novelty and attention magnify the time that we have.

A Conversation With Harry Wetherald, Co-Founder & CEO at Maze
➡ Stay ahead of cyber threats with AI-driven vulnerability management from Maze: https://mazehq.com/ In this conversation, I speak with Harry about how AI is transforming vulnerability management and application security. We explore how modern approaches can move beyond endless reports and generic…