AI: Abdication or Augmentation?

Yesterday a 22-year-old developer named Austin Kennedy posted something on X that has been viewed over a million times.

“I’m 22 years old and Claude Code is deteriorating my brain. Every single day for the last 6 months I’ve had 6 to 8 Claude Code terminals open, waiting for a response just so I can hit ‘enter’ 75% of the time. And it’s doing something to me… None of us feel as sharp as we used to.”

The replies are predictable. Some people are smug. Some are sympathetic. The standard framing showed up almost immediately: the tools are amazing, but using them trades off against using your own brain. Never forget the trade-off.

The trade-off framing isn’t wrong. It’s incomplete.

The framing assumes that AI use necessarily costs cognitive function and the only question is how much. That’s not the whole picture. Two people can have eight terminals open and have completely different cognitive outcomes. The question isn’t how much you use AI. The question is what role you’re letting it play.

There are two modes, and they produce opposite results.

Abdication

Abdication mode is the mode Austin is describing. AI does the thinking. You do the executing. You ask, it answers, you accept, you ship. Hit enter. Hit enter. Hit enter.

The brain softens because you’ve outsourced the part of cognition that builds neural pathways — the wrestling, the articulating, the testing-against-reality. Six to eight terminals running in abdication mode for six months will absolutely deteriorate your sharpness. Austin isn’t wrong about what’s happening to him. He’s just misdiagnosing what caused it.

I’ve been there. Earlier this year I was running editorial passes on one of my novels. I’d run a chapter through Claude, hit enter, glance at the output, hit enter on the next chapter. I told myself I was doing light edits. After two or three chapters I went back and read what had actually shipped, and the AI hadn’t just edited my prose — it had quietly changed what my protagonist was doing. Connection points I’d built in earlier chapters were gone. The story was drifting and I hadn’t noticed because I hadn’t been reading. I’d been hitting enter.

That was my wake-up call. Everything I generate, I have to review. Not just the words. The reasoning. The intention I gave it versus what came back. The places where it made decisions I didn’t authorize.

Abdication doesn’t announce itself. You slide into it.

Augmentation

Augmentation mode is the opposite shape. You are the thinker. AI is the amplifier. You bring a thought. AI sharpens it. You push back. You refine your own articulation against a partner that won’t let vague thinking pass.

The brain doesn’t soften here — it strengthens. You’re now articulating more precisely, more often, against a more demanding interlocutor than most humans you encounter daily. The reps build the muscle, not atrophy it.

The catch is that augmentation requires something to amplify. You can’t sharpen what isn’t there. Cal Newport has written extensively about the cognitive infrastructure people built before AI existed — the deep reading, the long-form thinking, the deliberate practice that produced minds capable of holding complex problems at length. Augmentation works because that infrastructure is there to amplify.

Most people running AI tools today are skipping the step where the infrastructure gets built. They’re trying to amplify thinking they never trained. The tool can’t strengthen what isn’t there.

C.S. Lewis wrote something in the preface to The Great Divorce that I keep coming back to:

“I do not think that all who choose wrong roads perish; but their rescue consists in being put back on the right road. A wrong sum can be put right: but only by going back till you find the error and working it afresh from that point, never by simply going on.”

You don’t undo a wrong math problem by working the problem harder. You go back to the step where the error entered and start again from there.

The same is true here. You don’t recover from six months of abdication by doing more AI work better. You go back to the step where you stopped engaging — wherever that is for you — and you rebuild the engagement. Then the AI can resume amplifying instead of replacing.

What I’d Say to Austin

He’s not in the trouble he thinks he’s in.

He felt something was wrong. He named it. He went on record in front of more than a million people and said his brain was getting softer and he thought the tool was doing it. That admission is the entire game. Most people sliding into abdication don’t notice. Or they notice and they make excuses. He didn’t.

I don’t know the specifics of his work. Six to eight parallel terminals could be a workflow where most prompts are routine approvals and a smaller share require real engagement — that’s a different scenario than rote abdication. Only he knows which it is.

But that 75% number is worth sitting with. Hitting enter most of the time without engaging is where abdication lives, regardless of how anyone arrived there. If most of what’s coming back from the tool isn’t getting read for reasoning — only for output — the cognitive workout has stopped happening. That’s the muscle that softens.

If I were in his shoes, I wouldn’t tell myself to use AI less. I would make sure my understanding was rock solid and that the enter button never got hit unless I knew exactly what was being executed.

That’s the whole discipline, compressed. Know what you’re trying to build before you start. Know what each prompt is asking the tool to do. Read the reasoning before you accept the output. Don’t approve what you didn’t understand.

If he can hold that line, the eight terminals aren’t the problem. They’re a workflow. If he can’t hold that line, the terminals aren’t the problem either — the engagement is, and that’s the thing to rebuild.

Either way, the diagnostic isn’t “am I using AI too much?” It’s “am I still doing the thinking?”

The tool isn’t the problem. The relationship to the tool is the problem.

He realized something was off in six months. He’s going to be fine.

He just has to walk back to where he went wrong, and start again from there.