It happened again. You're watching Stick on Apple TV+ with your partner, and halfway through Episode 3, you both freeze. "Wait," you say, pointing at the screen.
"Who IS that guy? I know him from somewhere." Your partner squints. "He was in that thing... with the car chase... and the thing with the briefcase!"
Five minutes of animated gesturing later, you're both staring at each other, stumped.
Then, like a reflex, you reach for your phone.
Three seconds on IMDb and boom! Mystery solved.
But here's the weird part: you both feel slightly... defeated?
Like you just gave up on a puzzle your brains could have solved if you'd just waited another minute.
That feeling?
That's your brain sending you a distress signal. And according to MIT neuroscientists, we should probably start listening.
The day machines started thinking for us
And we stopped noticing!
I'll never forget my first conversation with ChatGPT. It was like something straight out of a Black Mirror episode…this thing was actually talking to me.
Not just spitting out search results, but having a genuine dialogue. I thought, "Holy shit, we did it. We actually have the world at our fingertips now."
As someone from probably the last generation that had to hunt through library encyclopedias and get frustrated when bookstores didn't have what I wanted, this felt like the ultimate liberation.
No more information scarcity.
No more hitting dead ends.
The entire universe of human knowledge, available 24/7, with a personality that seemed to understand me.
But here's the thing about humans
We're spectacularly good at finding ways to make our lives easier, and then getting so comfortable in that ease that we forget how to do anything hard.
Fast forward to today, and I can analyze complex neuroscience papers in my pajamas, but I literally cannot calculate a restaurant tip without my phone.
Something felt off. So I started paying attention.
The brain scans
MIT researchers were curious about the same phenomenon many AI users were experiencing—the sense of becoming simultaneously smarter and cognitively weaker.
So they did what any good scientists do: they looked inside people's heads.
In their groundbreaking study "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task", researchers led by Nataliya Kosmyna strapped EEG sensors onto 54 volunteers and watched their brains during writing tasks.
🍄 Group 1 used only their brains (like our ancestors).
🍄 Group 2 could use Google search (so 2010).
🍄 Group 3 got ChatGPT (welcome to the future).
The results were stunning.
When people used ChatGPT, their brains went dark.
Not literally, but the neural highways that light up during deep thinking—the networks responsible for memory, creativity, and complex reasoning—showed dramatically reduced activity. It was like watching a bustling city suddenly empty during rush hour.
Even more alarming, 83% of ChatGPT users couldn't accurately recall content they'd created just minutes earlier.
Not yesterday.
Not last week.
Literally minutes after writing their own work, they drew blanks.
The researchers identified a phenomenon they called "cognitive debt"—measurable reductions in neural engagement that persisted even after participants stopped using AI.
Think of your brain as a muscle. Every time you let AI do the heavy cognitive lifting, you're essentially putting your mental biceps in a cast. And like any unused muscle, it begins to atrophy.
The London taxi driver revelation
This reminded me of something fascinating discovered by neuroscientist Eleanor Maguire at University College London.
For over a century, London taxi drivers have had to memorize every street, landmark, and route in the city—a mental map of roughly 25,000 streets within a six-mile radius of Charing Cross, as well as thousands of tourist attractions and hot spots.
MRI scans showed their posterior hippocampi were significantly larger than average people's—their memory centers had literally grown to accommodate this massive spatial knowledge.
The longer someone had been driving a taxi, the larger this brain region became, with hippocampal volume correlating positively with years of experience.
Follow-up research tracking trainee taxi drivers over time showed that those who successfully completed "The Knowledge" training developed enlarged posterior hippocampi, while those who failed showed no such changes.
But then GPS arrived.
As GPS became widespread, that hard-won brain development became less and less necessary.
The parallel is striking.
When we outsource navigation to GPS, we lose spatial memory abilities. When we outsource thinking to AI, we may be losing cognitive abilities we don't even realize we're surrendering.
The surgical dependency warning
The pattern isn't limited to essay writing or navigation.
Medical researchers are documenting similar cognitive dependency in operating rooms.
A 2024 study published in the journal Surgery warns that "increasing reliance on AI raises concerns, particularly regarding the potential deskilling of surgeons and overdependence on algorithmic recommendations. This over-reliance risks diminishing surgeons' skills, increasing surgical errors, and undermining their decision-making autonomy."
The research highlights a troubling paradox
While AI-assisted surgical planning can analyze patient scans and predict complications with superhuman precision, "over-reliance on AI could diminish surgical skills. It's crucial to find the right balance, using AI as an aid while continuing to develop hands-on surgical abilities."
The concern isn't hypothetical.
As surgical AI becomes more sophisticated, medical schools are grappling with how to train surgeons who can work both with and without technological assistance.
The same cognitive debt that makes us forget phone numbers could make future surgeons lose confidence in their clinical judgment when systems fail.
The Cognitive Bicycle Theory
These stories reveal something profound about how we're using AI differently than any tool in human history.
When we invented calculators, we didn't stop understanding math—we freed ourselves from arithmetic drudgery to tackle higher-level problems.
When we built cars, we didn't lose the ability to walk; we gained the ability to travel farther.
But AI is different.
It doesn't just amplify our physical or computational abilities. It mirrors our cognitive ones.
And when something looks like thinking, we tend to stop thinking ourselves.
I call this the Cognitive Bicycle Theory.
A bicycle makes you a faster traveler without making you a worse walker.
It amplifies your natural ability to move through the world. AI should work the same way, amplifying your natural ability to think through problems.
But here's what we're doing instead
We're climbing onto the bicycle and forgetting we have legs.
The difference between cognitive amplification and cognitive replacement is simple but crucial:
Amplification: You think first, then use AI to enhance, verify, or accelerate your ideas.
Replacement: You let AI think first, then accept or modify its output.
The sequence matters more than the tool.
The algorithmic average trap
Individual cognitive debt is just the beginning. The real danger?
We’re not just thinking less. We’re all starting to think the same.
AI isn’t pulling ideas from some divine cloud of originality.
It’s regurgitating patterns. Trained on the internet’s most common content, it gives us exactly what we’ve already seen: the obvious, the trending, the average.
So when you ask it to help you brainstorm?
You’re basically asking a robot to remix the front page of Reddit, Medium, and Pinterest.
We’re calling it “creativity,” but what we’re actually doing is automating sameness.
The internet had already started to feel like a giant echo chamber.
Now we’ve just sped it up and scaled it. Everyone using the same tools, trained on the same data, producing content that blends into one big beige blur.
Competent? Sure.
But also forgettable. Because innovation doesn’t live in averages. It lives in the weird, the risky, the stuff AI hasn’t seen a thousand times.
And if we don’t start thinking for ourselves again?
We’ll keep polishing content that looks great in a swipe… and disappears just as fast.
The great cognitive divide
Here's where this gets really interesting (and slightly terrifying). We're not just talking about individual productivity anymore. We're accidentally creating cognitive inequality.
Think about it
The people who learn to collaborate effectively with AI, who use it to amplify rather than replace their thinking, will develop superhuman capabilities.
They'll be able to research faster, ideate more broadly, and execute more efficiently than any human in history.
But the people who become dependent on AI, who let it do their thinking for them, will become cognitively fragile.
They'll appear productive in the short term but lack the deep understanding necessary for real innovation or adaptation when systems fail.
We're creating two classes of humans: the AI-amplified and the AI-dependent.
The amplified will thrive in an AI-rich world. The dependent will become... well, replaceable.
The cognitive vaccine: four ways to stay human
The good news? The research also revealed how to build immunity to cognitive debt.
Think of these as vaccines for your brain—small amounts of beneficial friction that keep your mental immune system strong.
1. The "You First" Protocol
Always develop your own thinking before engaging AI.
Spend five minutes wrestling with a problem yourself, then bring AI in as a collaborator, not a replacement.
The researchers found that people who generated their own ideas first, then used AI to enhance them, performed dramatically better than those who started with AI.
2. Turn AI into Socrates
Instead of asking "Write me a marketing plan," ask "What questions should I consider when developing my marketing strategy?"
This flips AI from answer-machine to thinking-partner.
You do the cognitive heavy lifting; AI helps you spot blind spots.
3. The Analog Anchor
Keep a physical notebook where you work through problems by hand before digitizing them. Writing activates different neural pathways than typing and forces slower, more deliberate thinking.
Think of it as cognitive strength training.
4. Embrace Your Weirdness
You are the only person with your exact combination of experiences, knowledge, and perspective.
When you engage AI, lead with YOUR context, YOUR constraints, YOUR unique situation.
This forces AI to work with your originality rather than serving up generic solutions.
What if being interesting becomes a superpower?
So what if the AI revolution's biggest casualty isn't jobs, but boring people?
In a world where AI can produce competent, average solutions to most problems, the humans who remain relevant will be those who can produce surprising, unique, deeply human insights.
Being interesting—truly interesting—might become the ultimate competitive advantage.
The irony is beautiful
The very tools that could make us boringly homogeneous also give us the power to become more uniquely ourselves than ever before.
But only if we remember to bring our full humanity to the collaboration.
Back to that actor
So let's return to that moment with Stick and the unrecognizable actor.
That split second of frustration when you can't quite place a face?
That's not a bug in your cognitive system—that's your brain fighting to stay human.
The next time it happens, sit with the uncertainty a little longer.
Let your mind wrestle with the puzzle. Feel the delicious frustration of almost-but-not-quite remembering.
Notice how different neurons fire as you search through your memory, making unexpected connections, following random associations.
That feeling—that beautiful, messy, inefficient process of human thinking—is exactly what we're in danger of losing.
And it's exactly what we need to preserve if we want to remain the most interesting species on the planet.
The choice is ours
We can become the humans from WALL-E, floating through life while our AI assistants do the thinking.
Or we can become something unprecedented—humans whose intelligence is amplified, not replaced, by artificial minds.
The future belongs to those who choose amplification over automation and collaboration over dependence, who think better with AI rather than letting AI think for them.
Your brain is waiting for you to make that choice.
What will you decide?