A developer's honest essay about AI eroding his coding ability is going viral. After a year of agent-driving 80% of my own code, here's what actually atrophied, what got sharper, and the one specific thing the agent really does take away.
**TL;DR** — A developer's [honest, anxious essay about AI eroding his coding ability](https://jpain.io/god-damn-ai-is-making-me-dumb/) showed up on HN this morning. I've been at the opposite end of the same arc for the last year — building MCP servers full-time, agent-driving 80% of my code — and the thing he's seeing isn't atrophy in any clean sense. What's actually happening is that the muscle being trained has changed. The question isn't "is AI making me dumb." It's "what skill am I now being measured on, and have I noticed the switch?"
## The post that prompted this
The piece is candid and worth reading. The author hasn't hand-written a line of production code in ~2 years. He's started "teaching myself how to code by hand again" because his skills have visibly deteriorated. His own drafts read like AI. He almost handed *the essay about AI eroding his skill* to Claude for validation.
That last detail is the giveaway. The thing being eroded isn't coding — it's the willingness to ship without a second opinion.
## What actually atrophies
Here's what's real, from my own dogfooding:
- **Typing speed on greenfield code dropped.** I no longer write a new React component, a new Postgres migration, a new fetch wrapper without the agent. When I have to, I'm slower than I was in 2023. This is real and measurable.
- **My short-term memory for syntax has decayed.** The last time I had to remember the exact CTE syntax for a recursive Postgres query, I blanked. I've delegated that to the agent for a year.
- **Memorized boilerplate is gone.** OAuth dance, Tailwind utility names, the precise prop signature of a library hook I use weekly — gone, in the sense that I can't bang them out from cold.
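For the record, the syntax I blanked on fits on an index card. A minimal sketch of a recursive CTE, run here through Python's `sqlite3` (SQLite shares the `WITH RECURSIVE` form with Postgres); the table and data are invented for illustration:

```python
import sqlite3

# In-memory database with a toy reports-to hierarchy (names invented).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, manager_id INTEGER, name TEXT)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [(1, None, "ada"), (2, 1, "bob"), (3, 2, "cam")],
)

# WITH RECURSIVE: an anchor query, UNION ALL, then a step that joins
# back to the CTE itself -- the part that's easy to blank on from cold.
rows = conn.execute("""
    WITH RECURSIVE chain(id, name, depth) AS (
        SELECT id, name, 0 FROM employees WHERE manager_id IS NULL
        UNION ALL
        SELECT e.id, e.name, chain.depth + 1
        FROM employees e JOIN chain ON e.manager_id = chain.id
    )
    SELECT name, depth FROM chain ORDER BY depth
""").fetchall()
print(rows)  # [('ada', 0), ('bob', 1), ('cam', 2)]
```

Three moving parts, and I couldn't produce them unaided. That's the decay, quantified.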
If "coding" is the activity of typing those things out from cold, then yes, my coding has atrophied.
## What sharpens
What got drastically better in the same window:
- **Reading code I didn't write.** I can now scan a 4,000-line unfamiliar Python module in 10 minutes and tell you, accurately, where the load-bearing assumption is. A year ago that was a half-day job. The reason is simple: I do it constantly now, because the agent only gets the right answer when I can frame the right question against its draft.
- **Spotting wrong code that compiles.** Agent output is syntactically perfect by default. Catching the subtle wrong-shape — a missing `await`, a transaction boundary in the wrong place, a `useEffect` that will silently loop — is a different muscle than catching typos. I exercise it eight hours a day now.
- **System-shape taste.** When the marginal cost of "write a new module" drops to near-zero, the constraining skill becomes *deciding which modules should exist*. Naming things, drawing seams, knowing when one good abstraction beats four leaky ones. That muscle is on fire.
- **Prompt design as engineering.** Writing a brief tight enough that an agent produces what I want on the first try is a real, learnable, transferable skill — and it's adjacent to API design, test-case design, and technical writing.
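The missing-`await` shape deserves a concrete illustration. A contrived Python sketch, with invented function names; the point is that the buggy version parses, runs without an exception, and silently does nothing:

```python
import asyncio

async def save_record(db: list, record: str) -> None:
    db.append(record)  # stands in for a real database write

async def buggy_handler(db: list) -> None:
    # Bug: the coroutine object is created but never awaited, so the
    # write never happens. CPython emits a RuntimeWarning when the
    # orphaned coroutine is garbage-collected, but execution continues.
    save_record(db, "order-42")

async def fixed_handler(db: list) -> None:
    await save_record(db, "order-42")

db: list = []
asyncio.run(buggy_handler(db))
print(len(db))  # 0 -- the record was never written
asyncio.run(fixed_handler(db))
print(len(db))  # 1
```

No typo to catch, nothing red in the editor. You only see it by knowing what shape the call *should* have.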
Net: my "type code from cold" skill went from a 9 to a 6. My "see why this large unfamiliar codebase is wrong" skill went from a 5 to an 8. If you measure me on the first axis, I have atrophied. If you measure me on the second, I have leveled up.
## The real cost the essay surfaces
There is something the essay flags that I do feel:
**Voice does not survive the agent loop.** If I ask Claude to draft this post, no amount of "in my voice" instruction recovers what I'd write unassisted. The cadence flattens. The specific verbs I use get sanded off. The dumb little asides — *like this one* — disappear. So for the narrow category of "things where the output has to sound exactly like me," the agent is a tax, not a leverage.
That category is small but it's the most important small category I own. My writing, my talks, my one-on-one DMs to a client. I keep those agent-free on purpose, the same way you'd keep a sourdough starter off the autoclave.
The author's instinct to write the essay by hand was the right one. His mistake was framing the whole exercise as a deficiency. He doesn't need to relearn syntax. He needs to notice that "syntax fluency" is no longer the test.
## What I'd tell someone in his seat
Three concrete things, none of which are "use AI less":
1. **Pick which axis you want to be measured on, on purpose.** If you want to be the person who hand-rolls clean code from cold, ration agent use. If you want to be the person who ships ten times more useful systems by reviewing agent output, lean in and accept the trade-off.
2. **Carve out an agent-free zone for voice.** Writing, naming, talks, founder updates. Things where the brand is *you-shaped*.
3. **Notice what's actually getting sharper.** Most people experiencing skill anxiety in 2026 are measuring themselves on a test that stopped being the relevant one in 2024.
## Why this matters
The "AI is making me dumb" framing is going to be everywhere for the next year. It's seductive, it's true-shaped, and it's wrong in a specific way: it assumes the old skill stack is the one being graded. It isn't. The skill stack rotated. Some muscles are getting smaller, some are getting bigger, and the people who notice the rotation are the ones who'll quietly compound while everyone else is busy mourning their 2023 selves.