The Human in the Loop Is Not the Problem

We have been here before.

The internet arrived and everyone called it the information highway. Suddenly, all the knowledge in the world was a few clicks away. Libraries, encyclopaedias, academic journals. Democratised overnight. It was extraordinary. It changed everything. Access to information stopped being a privilege and became a right.

Then came Web 2.0. The read-write web. The shift was subtle but the implications were enormous. People were no longer just consuming information. They were producing it. A blogger could reach the same audience as a columnist at the Times. A musician could distribute directly to listeners on the other side of the world. A teenager with a camera could build a following that dwarfed prime-time television. The tools of participation changed what the internet was for.

Each revolution redefined the relationship between humans and information. First, access. Then, participation.

We are at the beginning of a third revolution. And I think almost everyone is misreading it.


The dominant narrative around AI is about automation. Machines doing the work humans used to do. Pipelines replacing processes. Agents replacing people. The human, in this framing, is the bottleneck. Slow, expensive, error-prone. The goal is to remove them from the loop as efficiently as possible.

I think this is the wrong direction. Not just philosophically. Technically.

Automated pipelines degrade. Each step in a multi-agent chain sheds specificity. The AI at step three doesn't have access to the specific intent of the human who started the whole thing. It has only the output of step two, which already contains approximations and small errors. These compound. The further the pipeline runs from its human origin, the more the output resembles something plausible rather than something true. What emerges at the end is technically coherent and practically hollow.

This is not a limitation that the next model version will fix. It is structural. AI does not reason in a vacuum. It reasons from information. Specific information produces good reasoning. Generic information, assembled by another AI that was also working from generic information, produces an elaborate performance of confidence without substance.

The automation promise rests on a misunderstanding of what AI is actually good at.


Reasoning across vast bodies of knowledge in response to a specific, contextualised prompt: that is where AI excels.

Ask a good model about the political dynamics of the late Roman Republic and it will synthesise two thousand years of scholarship in seconds. Ask it to analyse the structural weaknesses in a piece of writing and it will produce observations that would take a human editor hours to formulate. Ask it to cross-reference a set of facts against a body of established knowledge and it will find connections a human researcher might miss entirely.

None of this is about removing the human. It is about making the human dramatically more capable.

In every case, the human provides the thing AI cannot: intent, context, and judgment. The AI provides the thing humans cannot: exhaustive knowledge, instant synthesis, and tireless availability.

This is the right relationship. The question is whether we build tools that honour it.


I have been thinking about this in terms of a third era.

Web 1.0 gave you access to information. You could look things up.

Web 2.0 gave you the ability to interact with information: produce it, share it, build on it.

The AI era, done right, gives you something categorically more powerful: the ability to deploy intelligence on your specific information, in your specific context, at the moment you need it.

Not access. Not participation. Reasoning on demand.

A doctor with an AI that has read every study ever published and can reason about this patient, with this history, in this consultation. That is not automation. The doctor is still the doctor. The doctor is the one who noticed the patient seemed distracted when they mentioned their sleep. Who decided this complaint was worth investigating further when the numbers said otherwise. Who knew which question to ask. The AI informs that decision with a depth of knowledge no single human could hold. The diagnosis is still the doctor's judgment. The responsibility is still the doctor's.


I write fantasy novels, and I build software. My realisation came from both.

As a novelist, I knew the loneliness of the work. Writing a complex series is a strange kind of isolation. You carry dozens of characters, hundreds of worldbuilding details, plot threads that stretch across thousands of pages. And when something isn't working, you are mostly alone with it. The best you can do is pay a developmental editor, hundreds or sometimes thousands of dollars, and wait. You get a report. Some inline comments. Good ones, if you are lucky. But they are never fully available to you. They read the book once and move on.

Then I became a developer. And I started working with VS Code and Copilot.

There was a moment I keep coming back to. I was debugging a piece of code, staring at the wrong file for twenty minutes, convinced the problem was somewhere in the lines in front of me. The agent told me the problem was in a different file entirely. A specific file. A specific line. Line 57, where an interactor was being called in a way that was breaking everything downstream.

That was the moment. Not because it saved me twenty minutes, though it did. But because the agent had read the whole codebase. It knew things about my project that I had temporarily lost track of. It could hold everything simultaneously in a way I couldn't, and reason about my specific problem from that vantage point.

I immediately thought about chapter 12.

Every writer has a chapter 12. The chapter that doesn't feel right but you can't really tell why. You've read it ten times. You've moved paragraphs around. You've changed the dialogue. Something is still wrong and you can't put your finger on it. Imagine an agent that has read your entire manuscript and simply tells you: your protagonist didn't earn this victory. He mostly got lucky when the sword fell into his hands. How about you make him figure out something clever that gets him within reach of the sword, or grab something else and use it as a makeshift weapon?

That would be extraordinary. Not because the AI wrote the solution. It didn't. It pointed at the problem with the specificity that only comes from having read everything, and then handed the problem-solving back to the writer, where it belongs.

That is what I am building. That is what Muse is.


I want to say this directly, because it runs against what most of the industry is building toward.

The human is not the problem that AI is here to solve.

The human is the source of the intent, the context, and the judgment that makes AI outputs valuable. Remove the human and you remove the only thing that makes the output mean something. What remains is a system that produces confident, fluent, and increasingly hollow text. Optimised for the appearance of reasoning without the substance of it.

The most powerful AI applications of the next decade will not be the ones that automate the most human work. They will be the ones that give individual humans access to a quality of reasoning that was previously available only to the well-resourced and the extraordinarily lucky.

The doctor in a small clinic reasoning with the same depth as the specialist at a teaching hospital. The first-generation lawyer with access to the same knowledge as the partner at a top firm. The novelist working alone with access to the same editorial intelligence as the writer with a brilliant developmental editor on speed dial. Always available. Always in context. Always reasoning about this story, this chapter, this problem.

That is the revolution worth building.

Web 1.0: everyone gets the information.
Web 2.0: everyone gets to participate.
The AI era, done right: everyone gets access to intelligence that reasons about their specific situation.

The human in the loop is not the problem. The human in the loop is the whole point.

I am building Muse, an AI writing environment for novelists built on this principle. If this resonated, you might want to take a look.
