I have 34 browser tabs open right now. None of them are helpful.
Three are ChatGPT conversations where I asked the same question slightly differently, hoping for a more useful answer. Four are Stack Overflow pages I'll never read because the AI already gave me the code snippet. Two are Wikipedia rabbit holes that started with "Qwen 3 Coder vs Claude Sonnet 4 vs GPT-5 coding benchmarks" and somehow ended at "List of people who died on toilets."
My browser has become a haunted house where good intentions go to die.
But here's the weird part: I'm not dumber than I was five years ago. I'm different-smart. You know how calculators didn't kill off mathematicians? They just made them stop caring about long division and start solving actually interesting stuff.
Maybe we're living through one of those moments. Like, a big one. The kind historians will put in textbooks someday. Remember when the Greeks lost their minds over writing? Plato was convinced it would destroy human memory. (Spoiler: it didn't.)
Now we're doing the same dance, just with AI instead of alphabets.
When Google Was Still a Verb (The Good Old Days)
Remember when you actually had to learn things?
I used to spend hours reading MDN docs to understand how JavaScript closures worked. Not because I enjoyed the academic exercise, but because if I didn't truly get it, my code would break in mysterious ways at 2 AM. There was skin in the game.
Now? I prompt: "Explain closures like I'm a golden retriever who codes" and get a perfectly adequate explanation with examples. Copy, paste, ship. Done.
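(The explanation always comes with some variant of the classic counter snippet. What follows is my reconstruction, not a transcript, but you get the idea:)

```javascript
// The inner function "closes over" count: it keeps a live
// reference to the variable even after makeCounter has returned.
function makeCounter() {
  let count = 0;
  return function () {
    count += 1;
    return count;
  };
}

const counter = makeCounter();
counter(); // 1
counter(); // 2, because count survives between calls
```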
The actual concept of closures? It hangs around in my brain just long enough for me to hit deploy, then... poof. Gone. Like it was never there.
Is this... bad?
My knee-jerk reaction is yes. We're becoming intellectual passengers in our own lives, letting algorithms do the heavy lifting while our mental muscles atrophy. Like those people in WALL-E floating around in chairs, except instead of forgetting how to walk, we're forgetting how to think.
But then I caught myself using the same argument people made about GPS: "You'll never learn the streets!" And sure, maybe we can't navigate like we used to. But we can also drive to a random address in Gurgaon without spending 20 minutes with a fold-out map and a highlighter.
The skill shifted. From memorizing routes to trusting systems and recognizing when they're wrong.
The Great Information Flood
Here's where it gets interesting (and slightly terrifying).
We're drowning in information, but somehow getting dumber about distinguishing good information from hot garbage. It's like having access to every book ever written, but they're all mixed together with grocery lists, shower thoughts, and fan fiction.
I asked ChatGPT last week about some obscure programming concept. It gave me a confident, well-structured answer with code examples. Beautiful. I used it in my project. Only later did I realize the API it referenced doesn't exist anymore—hasn't existed for three years.
The AI lied with such authority that I didn't even think to verify. And honestly? That's on me. I treated the AI like Google in 2010: mostly reliable, occasionally wrong, but at least pointing to sources I could check. Except AI doesn't point to sources. It is the source, synthesized from a billion sources I'll never see.
And that's when it hit me. We're not just getting lazy about finding answers. We're getting lazy about checking if those answers are any good.
Wait, that's... actually kind of terrifying?
The New Job Description: Chief Vibes Officer of Truth
But maybe this isn't the end of thinking. Maybe it's thinking... promoted?
Stay with me here. Something weird has been happening in my workflow lately. When I actually use AI well, I'm not thinking less. I'm thinking... sideways? Differently?
Instead of burning brain calories on "what are all the ways to solve this," I'm spending them on "which of these five AI-generated approaches won't blow up in my face." The AI dumps a bunch of options on my desk. I get to be picky.
It's like being an editor instead of a writer. Or maybe a curator instead of an artist? I don't know. Less "create from scratch," more "this, not that."
I've started running this little experiment: for every AI-generated answer I use, I force myself to ask three questions:
- Does this actually solve my problem?
- What breaks if this answer turns out to be wrong?
- How would I know if it's working?
And you know what's weird? I'm actually thinking more now, not less. Just... different thinking. Like flying instead of walking, I guess?
The Sensemaking Toolkit
Here's what I've learned about staying sharp in the age of instant answers:
Triangulate everything. When I have a gnarly problem, I'll ask the AI for one perspective, then explicitly ask for the opposing view, then check with actual humans who've been there. Three lenses: optimistic AI, pessimistic AI, battle-scarred human.
Prompt yourself like you prompt the machine. I've started asking myself better questions. Instead of "How do I fix this bug?" I ask "What assumptions am I making about why this bug exists?" The AI can help debug code, but only I can debug my own thinking.
Walk without podcasts. This sounds random, but hear me out. My best insights still arrive when I'm not actively seeking them. When I give my brain some quiet time to synthesize all the AI-generated inputs floating around up there. The AI can draft my thoughts, but it can't have my "aha" moments.
Use AI as a thinking partner, not a thinking replacement. Instead of asking "What should I do?" I ask "What am I not considering?" or "Poke holes in this plan." Make the AI your devil's advocate, not your decision maker.
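If it helps to see the shape of the ritual, here it is sketched as code. To be clear: `askModel` is a made-up stand-in for whatever chat window or API you actually use, not a real library call.

```javascript
// askModel is a hypothetical helper standing in for your actual
// model API or chat window; here it just echoes the prompt back.
async function askModel(prompt) {
  return `(model reply to: "${prompt.slice(0, 40)}...")`;
}

async function triangulate(problem) {
  // Lens 1: the optimistic take.
  const plan = await askModel(`Propose a solution for: ${problem}`);

  // Lens 2: the same model, forced into devil's-advocate mode.
  const critique = await askModel(
    `Poke holes in this plan. What am I not considering?\n\n${plan}`
  );

  // Lens 3: a battle-scarred human. No API for that one yet.
  return { plan, critique, humanReview: "go ask someone who's been there" };
}

triangulate("our flaky deploy pipeline").then(console.log);
```

The point isn't the code. It's that the third lens never gets automated.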
The End of Thinking As We Knew It
So are we witnessing the end of thinking?
Kind of. But also not at all.
We're watching one kind of thinking die. The Google-search-and-synthesize kind. The "let me spend twenty minutes figuring out what everyone else already knows" kind.
Good riddance, honestly. Machines are really good at that stuff.
But we're not witnessing the end of thinking as sensemaking. As connecting dots that don't want to be connected. As asking questions that haven't been asked before. As having taste about what matters and what doesn't.
The AI can tell me how to implement a feature. It can't tell me whether that feature is worth implementing. It can generate a business plan, but it can't tell me if I actually want to start that business.
In a world where answers are cheap, good questions become the luxury item.
The future doesn't belong to people who know the most facts. It belongs to people who can frame the right problems, ask the right questions, and have good taste about which AI-generated solutions actually solve the problem at hand.
That's not the end of thinking. That's thinking evolved.
Now, if you'll excuse me, I have 34 tabs to close. Well, 33. I'm keeping the one about people who died on toilets. You never know when that might come in handy.