Edition 015

Revisiting Protopia

When I wrote Human Traits, I was trying to escape a trap.

Every story about artificial intelligence seemed to follow the same tired script. The machine apocalypse. The villain narrative. The inevitable dystopia where humanity builds the thing that eventually replaces it.

That formula works — particularly for a Hollywood script. But it felt incomplete.

So I anchored the book around a different idea. One that Kevin Kelly called Protopia.

Not utopia. Not dystopia.

Just a world that gets slightly better over time. A thousand small improvements. Some missteps. Some regressions. But overall, a slow arc toward progress.

AI, in that framing, wasn’t an alien force descending on humanity. It was something we were shaping. Training. Teaching. Feeding with our language, our biases, our hopes.

In other words: AI would become a reflection of us.

And for a while, that idea felt steady.

But lately I’ve been revisiting it.

Because the world feels different now.

The First AI Wars

For the first time in history, artificial intelligence isn’t just a research lab curiosity or a productivity tool.

It’s embedded directly inside modern conflict.

Targeting systems.
Drone coordination.
Intelligence analysis.
Battlefield decision support.

The wars unfolding right now across different regions of the world are the first where AI isn’t adjacent to the strategy.

It is part of the strategy.

That realization carries a strange weight.

For decades we imagined the future of AI through sci-fi lenses. Machines rising up. Robots turning against their creators. But the reality is far less theatrical.

AI hasn’t arrived as an antagonist. It’s arrived as infrastructure. And like most infrastructure, it quietly amplifies whatever humanity decides to do with it. Including war.

Which leads to a harder truth. The unsettling parts of AI don’t really come from the machines. They come from us.

The Mirror Problem

Technology has always been a mirror. It reflects the values, priorities, and power structures of the societies that build it.

The printing press amplified ideas.
Industrial machines amplified production.
Social media amplified attention (or, as some would argue, destroyed it).

AI amplifies decision-making (and again, some might suggest it is destroying this part of life too).

And when that amplification intersects with geopolitics, military power, and human conflict, the results are complex and uncomfortable.

But this is exactly where the Protopia lens becomes interesting again.

Because if AI is an amplifier of human intent, then the real story isn’t just about the technology.

It’s about us.

Our incentives. Our ethics. Our capacity for restraint.

Zooming In

It’s easy to think about AI shaping the world at the macro level. Nations, militaries, corporations, tech giants. But Protopia doesn’t actually unfold at the macro level. It unfolds in millions of small decisions made by individuals.

Which brings the question closer to home.

Because while we talk about governments shaping AI, something quieter is happening in our everyday lives. AI is beginning to shape how we think. Not dramatically. Not overnight. But subtly:

The way we write.
The way we research.
The way we brainstorm.

Sometimes even the way we finish our own thoughts. You start to notice it in small moments. You’re halfway through solving a problem and think: I’ll just ask AI.

And again, none of this is inherently bad. In many cases, it’s extraordinary. But it introduces a feedback loop that’s worth paying attention to.

The Quiet Exchange

Every interaction with AI is a two-way exchange. We shape the systems with prompts, corrections, and data. But the systems shape us through convenience, suggestion, and cognitive shortcuts. We begin to adapt to the tool. Just like search engines subtly reshaped curiosity. Just like smartphones quietly reshaped memory.

AI may be doing something even deeper.

It may be reshaping how we approach thinking itself.

The Question

When I wrote Human Traits, I believed AI would ultimately reveal more about humans than machines. I still believe that.

But the mirror works both ways.

Which brings us back to Protopia.

If the future really is built through small, incremental shifts… then the direction of AI won’t be determined only in labs, boardrooms, or geopolitical strategy rooms.

It will also be determined in quieter places.

Your desk.
Your browser.
Your next prompt.

So here’s the question I’ve been sitting with lately: Are we shaping AI? Or is AI quietly shaping us? The answer is probably both.

And the balance between those two forces may quietly determine what the next decade looks like. Not in headlines. But in habits.

So, the next time you open an AI prompt, ask yourself this:

What’s the real question I’m bringing to the machine?

Because the future of AI may be shaped less by the answers it gives… and more by the questions we ask.
