We have been here before.
The internet arrived and everyone called it the information highway. Suddenly, all the knowledge in the world was a few clicks away. Libraries, encyclopaedias, academic journals. Democratised overnight. It was extraordinary. It changed everything. Access to information stopped being a privilege and became a right.
Then came Web 2.0. The read-write web. The shift was subtle but the implications were enormous. People were no longer just consuming information. They were producing it. A blogger could reach the same audience as a columnist at the Times. A musician could distribute directly to listeners on the other side of the world. A teenager with a camera could build a following that dwarfed prime-time television. The tools of participation changed what the internet was for.
Each revolution redefined the relationship between humans and information. First, access. Then, participation.
We are at the beginning of a third revolution. And I think almost everyone is misreading it.
The dominant narrative around AI is about automation. Machines doing the work humans used to do. Pipelines replacing processes. Agents replacing people. The human, in this framing, is the bottleneck. Slow, expensive, error-prone. The goal is to remove them from the loop as efficiently as possible.
I think this is the wrong direction. Not just philosophically. Technically.
Automated pipelines degrade. Each step in a multi-agent chain trades specificity for generality. The AI at step three doesn't have access to the specific intent of the human who started the whole thing. It has only the output of step two, which already contains approximations and small errors. These compound. The further the pipeline runs from its human origin, the more the output resembles something plausible rather than something true. What emerges at the end is technically coherent and practically hollow.
This is not a limitation that the next model version will fix. It is structural. AI does not reason in a vacuum. It reasons from information. Specific information produces good reasoning. Generic information, assembled by another AI that was also working from generic information, produces an elaborate performance of confidence without substance.
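A toy sketch makes the structural point concrete. Nothing here is a real system; the stage names and the model stub are purely illustrative. The thing to notice is the signatures: in the pipeline, each stage can only see the previous stage's output, so the originating intent is unreachable after step one, while in the augmented call the human's intent and context travel with every request.

```python
# A toy illustration, not a real system: the model call is a stub.
def model(prompt: str) -> str:
    return f"<output for: {prompt[:48]}...>"

# Automation: a chained pipeline. Each stage sees only the previous stage's
# output. The original intent is visible to stage one and to nothing after it,
# so small approximations have nowhere to be corrected.
def pipeline(intent: str) -> str:
    plan = model(f"Plan a solution for: {intent}")    # grounded in the intent
    draft = model(f"Expand this plan: {plan}")        # grounded only in the plan
    return model(f"Polish this draft: {draft}")       # two removes from the intent

# Augmentation: a human-anchored loop. Every call carries the human's specific
# intent and the context they supplied, and the answer comes back to the human
# for judgment before anything else happens.
def augmented_step(intent: str, context: str, question: str) -> str:
    return model(f"Intent: {intent}\nContext: {context}\nQuestion: {question}")
```

The fix is not a smarter model at step three. It is a structure in which the intent never leaves the call.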
The automation promise rests on a misunderstanding of what AI is actually good at.
Reasoning across vast bodies of knowledge in response to a specific, contextualised prompt: that is where AI excels.
Ask a good model about the political dynamics of the late Roman Republic and it will synthesise two thousand years of scholarship in seconds. Ask it to analyse the structural weaknesses in a piece of writing and it will produce observations that would take a human editor hours to formulate. Ask it to cross-reference a set of facts against a body of established knowledge and it will find connections a human researcher might miss entirely.
None of this is about removing the human. It is about making the human dramatically more capable.
In every case, the human provides the thing AI cannot: intent, context, and judgment. The AI provides the thing humans cannot: exhaustive knowledge, instant synthesis, and tireless availability.
This is the right relationship. The question is whether we build tools that honour it.
I have been thinking about this in terms of a third era.
Web 1.0 gave you access to information. You could look things up.
Web 2.0 gave you the ability to interact with information: produce it, share it, build on it.
The AI era, done right, gives you something categorically more powerful: the ability to think with all of human knowledge behind you. Not automation. Augmentation.
After access and participation, a third era — Cognitive Amplification.
A doctor with an AI that has read every study ever published. When a patient presents with symptoms that don't quite fit, the AI can cross-reference that specific combination against the full body of medical literature in seconds — including rare presentations the doctor may never have encountered in thirty years of practice. It can flag the drug interaction buried in a study with forty patients. It can surface the pattern the doctor suspected but couldn't prove.
The doctor is still the doctor. The one who noticed the patient hesitated before answering. Who sensed something was off before any test confirmed it. Who knew which question to ask. The AI answers that question with a depth of knowledge no single human could hold. The diagnosis is still the doctor's. The responsibility is still the doctor's.
That is Augmentation.
I write fantasy novels, and I build software. Two worlds that don't usually intersect. But their intersection is exactly the reason for this post.
As a novelist, I know the loneliness of the work. Writing a complex series is a strange kind of isolation. You carry dozens of characters, hundreds of worldbuilding details, plot threads that stretch across thousands of pages. And when something isn't working, you are mostly alone with it. The best you can do is pay a developmental editor hundreds, sometimes thousands of dollars, and wait. You'll get a report. Some inline comments. Good ones, if you're lucky. But the editor is never fully available to you. They read the book once and move on.
Then I became a developer. And I started working with VS Code (a software development environment) and Copilot (an AI-powered coding agent integrated into VS Code).
One very ordinary day, I was debugging, staring at a file of nearly four hundred lines for twenty minutes, convinced the problem was somewhere in there. The server logs certainly seemed to point that way. Then I asked the agent. It took about twenty seconds to tell me the problem was in a different file entirely. A specific file. A specific line. Line 57, where an interactor was being called in a way that was breaking everything downstream.
That was the aha moment. Not because it saved me twenty more minutes (or who knows how much longer), though it did. But because the agent had read the whole codebase. It knew things about my project that I had temporarily lost track of. It could hold everything simultaneously in a way I couldn't, and reason about my specific problem from that vantage point. It would not have found the bug unprompted; it needed my very specific guidance. But given that guidance, it found the bug in seconds.
I immediately thought about my novels.
Every writer has been through this. You finish the first draft of a chapter, read it in one go, and immediately realise that something doesn't feel right. You can't really tell why, but it's simply not working. So you read it ten times. Move paragraphs around. Change the dialogue. But it's still wrong and you can't put your finger on it. Now imagine an agent that has read your entire manuscript, including every single support document, and just tells you: your protagonist didn't earn this victory. He mostly got lucky when the sword fell into his hands. How about you make him figure out something clever that gets him within reach of the sword, or grab something else and use it as a makeshift weapon?
That would be extraordinary. Not because the AI wrote the solution. It didn't. It flagged a problem you recognise the moment it's in front of you, and then handed the problem-solving back to you, the writer, where it belongs. You know your craft. You know your protagonist needs agency. You simply had too many plates spinning, and a tiny detail slipped through.
It's a cognitive amplifier. That is what I am building. That is what Muse is.
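In code terms, the pattern is simple. To be clear, this is a generic sketch of the idea, not how Muse is built, and it assumes the manuscript and its support documents live in plain Markdown files: gather the full context, pair it with the writer's specific question, and hand the answer back to the writer.

```python
from pathlib import Path

# Stand-in for whatever large-context model you use; wire up a real client here.
def ask_model(prompt: str) -> str:
    raise NotImplementedError("connect your model of choice")

def ask_about_manuscript(manuscript_dir: str, question: str) -> str:
    # Gather everything: chapters, character sheets, worldbuilding notes.
    # Nothing is summarised away; the point is full context.
    sources = sorted(Path(manuscript_dir).rglob("*.md"))
    corpus = "\n\n".join(p.read_text(encoding="utf-8") for p in sources)

    # The writer's specific question rides alongside the full manuscript.
    # The model flags problems; deciding what to change stays with the writer.
    prompt = (
        "You have read the manuscript and support documents below.\n"
        f"Answer this question about them, pointing to specific chapters:\n"
        f"{question}\n\n{corpus}"
    )
    return ask_model(prompt)

# The kind of question only the author knows to ask:
# ask_about_manuscript("my-novel/", "Does the protagonist earn the victory in the final duel, or does he just get lucky?")
```

In practice a whole series will not fit in a single prompt, and retrieval or structured summaries enter the picture, but the shape stays the same: the human's question, grounded in the human's material.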
I want to say this directly, because it runs against what most of the industry is building toward.
The human is not the problem that AI is here to solve.
The human is the source of the intent, the context, and the judgment that makes AI outputs valuable. Remove the human and you remove the only thing that makes the output mean something. What remains is a system that produces confident, fluent, and increasingly hollow text. Optimised for the appearance of reasoning without the substance of it.
The most powerful AI applications of the next decade will not be the ones that automate the most human work. They will be the ones that give individual humans access to a level of cognition that feels truly super-human. In the past, only those with vast resources, or the extraordinarily lucky, could mimic a fraction of this power. If we build the right tools, everyone will have all of it.
The doctor in a small clinic reasoning with the same depth as the specialist at a teaching hospital. The first-generation lawyer with access to the same knowledge as the partner at a top firm. The novelist working alone with access to the same editorial intelligence as the writer with a brilliant developmental editor on speed dial. Always available. Always in context. Always reasoning about this story, this chapter, this problem.
That is the revolution worth building.
Web 1.0: everyone gets the information.
Web 2.0: everyone gets to participate.
The AI era, done right: everyone gets access to amplified intelligence.
The human in the loop is not the problem. The human in the loop is the whole point.
I am building Muse, an AI writing environment for novelists, grounded in this principle. If this resonated, you might want to take a look.
Check out Muse