It's 2026 and I'm writing this knowing it'll age badly. Everything about AI ages badly. Predictions from 2022 read like cave paintings now. But I think there's value in getting it wrong on the record, so here goes.
By 2040, I think the conversation about AI will be over. Not because we'll have resolved it. Because it'll feel like asking "how do you feel about electricity." It'll be ambient. The debate dies not from consensus but from exhaustion and normalization. Your kids won't understand why you were anxious about it, the same way you don't really understand why people were anxious about calculators replacing mathematicians. They'll get it intellectually. They won't feel it.
And that transition — from feeling the strangeness to not feeling it — that's the thing I actually want to talk about. Because I think we lose something in that transition that we won't notice we've lost, and by the time someone points it out, the pointing out itself will feel quaint.
Here's what I mean.
Right now, in 2026, when I use AI to help me think through a problem, there's still a friction. A small one, shrinking every month, but it's there. Some part of my brain goes "wait, did I actually understand that or did I just read something that sounded like understanding?" That friction is useful. That friction is maybe the last immune response of a mind that evolved to think for itself, encountering a system that makes thinking feel unnecessary.
By 2040, I think that friction is gone for most people. Not because they decided to stop questioning AI. Because questioning AI will feel like questioning your memory. You know how right now you don't really know if you remember something or if you remember remembering it? There's this weird recursion where the act of recalling changes the recall itself, and after enough cycles you genuinely cannot separate the original experience from the story you've told yourself about it. I think that's what happens with AI-assisted cognition at scale and over time. You won't know which ideas are yours. Not in a scary way. In a mundane, completely ordinary way. The way you don't know which of your opinions came from a book you read versus a conversation you had versus something you saw on Twitter at 2am and forgot the source of.
This is not a dystopia I'm describing. That's the uncomfortable part. Dystopias are dramatic. This is just... drift.
Let me try a different angle.
Think about expertise. What it means to be an expert at something in 2040. Right now expertise is this bundled thing — you know facts, yes, but you also have intuition, taste, judgment, the ability to smell when something is off before you can articulate why. You earned all of that through years of being wrong and slowly building a model of the world that's detailed enough to catch its own errors. That process is slow and painful and deeply human.
Now imagine a world where a 22-year-old with the right prompting setup can produce output that's indistinguishable from a 20-year veteran's work. Not sometimes. Reliably. What happens to expertise as a concept? I don't think it disappears. I think it undergoes this weird inversion where the rare skill isn't producing good work, because everyone can do that now; the rare skill is knowing whether work is good. Evaluation becomes the bottleneck. Judgment becomes the scarce resource. And here's the problem: judgment is the thing that's hardest to develop without doing the actual work that AI just automated away.
So you get this gap. The people who developed judgment the old way, pre-AI, they're aging out. The people coming up have access to incredible tools but a potentially atrophied ability to evaluate what those tools produce. And nobody notices because the outputs look fine. They look better than fine. They look polished and confident and well-structured. The failure mode isn't bad work. It's work that's subtly wrong in ways that only the kind of expertise no longer being cultivated can detect.
I keep thinking about this in terms of cooking. Stupid metaphor maybe but stay with me. My grandmother made this dal that was extraordinary and she could taste it at any point during the cooking and tell you exactly what it needed. More salt, more time, the heat is slightly wrong. She couldn't explain how she knew. It was in her hands, she'd say. Decades of cooking had given her a sensing apparatus that operated below conscious thought. Now imagine I have a machine that makes dal that's objectively good. 95th percentile dal. Better than what most humans can make. I eat this dal my whole life. I never develop the sensing apparatus my grandmother had. I can't tell the difference between 95th percentile and 99th percentile. I don't even know there's a difference to notice. The dal is good. Everyone says it's good. The machine's ratings are high. But something is missing that I lack the vocabulary to identify because I never learned the language of that particular quality.
Scale this to everything. Engineering. Medicine. Law. Research. Writing. Policy. Scale it to the domains where the difference between 95th percentile and true mastery isn't a matter of taste but a matter of someone living or dying, of a bridge holding or not, of a policy working or quietly failing in ways that take years to surface.
I think by 2040 we'll have a word for this phenomenon. I don't know what it is yet. Something that captures the specific blindness that comes from having access to very good outputs without the underlying understanding that would let you distinguish very good from truly right. Fluency blindness, maybe. Or competence without comprehension. Whatever we call it, I think it becomes one of the defining features of that era.
And look — I know the counterargument. Tools have always changed what expertise means. Doctors used to memorize everything, now they look things up, and medicine got better. Engineers used to do calculations by hand, now they use software, and bridges got safer. The tool handles the routine, the human handles the judgment. That's the story we tell. And it's true, historically. But I think there's something different this time, and the difference is that previous tools automated the mechanical part of cognition. They didn't automate the feeling of understanding. A calculator doesn't make you think you understand math. But an AI that explains a concept clearly, in natural language, with examples, with apparent reasoning — that can absolutely make you feel like you understood something you didn't. That's new. We've never had a tool that could counterfeit comprehension before.
By 2040 I think the most important question isn't "what can AI do." It's "what have humans stopped being able to do, and do they know it." And I suspect the answer to the second part is mostly no. Not because of some conspiracy or failure of character. Just because it's really hard to miss a capability you never developed. You don't mourn a sense you never had.
The last human skill might not be creativity or empathy or any of the things people like to list when they're being optimistic about human uniqueness. It might just be the willingness to sit with not knowing. To be confused and to resist the urge to make the confusion go away by asking something smarter than you to resolve it. That's not a skill anyone values right now. Sitting with confusion feels like a bug, not a feature. But in a world where instant coherent answers are always available, the ability to stay in the question — to tolerate ambiguity long enough to actually think rather than just consume a thought — that might be the thing that separates people who understand the world from people who merely navigate it successfully.
I think about this a lot because I'm part of the problem. I spend my days making models better at reasoning, at code, at the things that used to require the kind of slow human judgment I just spent two thousand words worrying about. I don't have a resolution for that contradiction. I just think someone building the thing should be honest about what the thing might cost.
I could be completely wrong about all of this. Probably am about the specifics. But I think the general shape is right: the risk isn't that AI takes something from us. It's that we voluntarily set something down and then forget we were ever carrying it.
And by 2040, most people won't even know what I'm talking about.