The mental model problem

AI writes code faster than you can understand it

A month ago I refactored a small project with Claude Code. It felt like magic. A couple of prompts → tests passed → deployed → moved on. A week later, the server started crashing. A memory leak. An i18n issue. I opened the project and had absolutely no idea where to start.

I completely skipped the slow part of coding. Turns out the slow part was the whole point.

The map you’re not building

When you write code yourself, you’re not just typing. You’re building a mental map. Every bug you chase, every weird edge case, every “why the hell is this happening”. It all goes into that map. Eventually you can navigate your codebase because you suffered through it.

When the LLM writes the code, you skip all that. It gives you something that works, looks reasonable, runs fine. Your brain goes “cool, next task.” No map built.

When you’re coding, you’re also thinking about the product. You hit an edge case and suddenly you’re asking “wait, what should actually happen here?” Some of my best product ideas came while writing code, not in meetings. The code forces you to be specific in ways that specs never do.

The bottleneck used to be typing. Now, the bottleneck is understanding. AI lets you skip that too if you’re not careful.

The slot machine

Nir Eyal wrote Hooked about how products create habits through variable rewards. Great read. AI coding is exactly that. Sometimes it nails something in seconds that would’ve taken you an hour. Instant dopamine surge. So you keep pulling the lever, even when the odds aren’t great.

Going back to manual coding feels impossibly slow. Why would you? Don’t you have emails to check? Other things to do?

The main problem is that the code just… looks correct. Clean variable names. Familiar patterns. Plausible explanations. The right amount of comments. Your brain relaxes. I’ve nodded along to AI code that was confidently, politely, totally wrong. Tests passed. But the logic was broken in a way I only caught because I traced through it manually.

Maybe the worst offender is AI-generated tests. I’ve seen it mock everything, then test the mock. Tests that assert a function returns what it returns. Circular nonsense that looks like coverage but tests nothing. If you’re not paying attention, you’ll merge it and feel good about your test suite.
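To make the pattern concrete, here’s a minimal sketch (hypothetical names, Python’s `unittest.mock`): the test patches the very function it claims to test, so the assertion only checks the mock’s canned return value. The real logic never runs, and the test stays green even though the function is broken.

```python
from unittest.mock import patch

# Hypothetical function under test -- note the bug:
# it adds the rate instead of applying a discount.
def apply_discount(price, rate):
    return price + rate  # broken on purpose

# The circular test: it mocks apply_discount itself, so the
# assertion only checks the mock's canned return value (90.0).
# The real, broken implementation is never executed.
@patch(__name__ + ".apply_discount", return_value=90.0)
def test_apply_discount(mock_fn):
    assert apply_discount(100, 0.10) == 90.0

test_apply_discount()  # passes, proving nothing
```

This is why “coverage” from generated tests can be an illusion: the assertion is true by construction, not because the code is correct.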

Sometimes you need to stop prompting and just open your favorite editor. Resist the dopamine hit.

Two kinds of code

I’ve started splitting code into two buckets:

  • Code I don’t need to model: low risk, follows conventions, easy to verify
  • Code I can’t help modeling: business-critical, novel, touches multiple systems

AI handles the first bucket well. I rebuilt RandomWheel from scratch with Claude Code. Small, familiar, easy to verify. Almost one-shotted.

The second bucket is where understanding actually forms. That’s where it gets interesting. That’s the part I can’t afford to skip.

So I try to:

  • Read. Every. Line. Before. Accepting
  • Rewrite parts myself, even when it’s slower
  • If I can’t explain it, I don’t ship it
  • For bigger tasks, write a plan first. It builds the mental model before any code exists
  • Use AI to explain code, not just write it. Using it for comprehension is underrated.

I break these rules all the time. I’m still figuring this out. Aren’t we all?

The tools will keep getting better. Enjoy the speed. Just don’t skip the struggle.