Every conversation about AI lands in one of two camps.

Camp one: AI will replace human work. The curve is steep, the timeline is short, most jobs are in play.

Camp two: AI augments human work. It’s a tool, like the spreadsheet or the search engine. Humans stay in the loop. Productivity goes up.

Both camps are having the wrong argument.

The interesting question isn’t what AI replaces or what it amplifies. It’s what becomes possible that wasn’t possible before, for either camp.

The telescope problem

When Galileo pointed a telescope at Jupiter, he didn’t just see farther. He saw something that couldn’t have been described before the instrument existed: moons orbiting another planet, motion that the astronomy of his day had no vocabulary for.

The telescope didn’t augment human vision. It opened a new category of observation entirely.

I think we’re at a similar moment with AI and human collaboration. Not augmentation. Not replacement. The emergence of a third collaborator — something that is neither the human nor the machine, but the product of their interaction.

What the third collaborator looks like

I’ve been running a small experiment for six months. Every significant decision I make — product, technical, personal — I work through with an AI before I commit to it.

Not to outsource the decision. To stress-test my reasoning before it hardens.

What I’ve found is that the output of these sessions isn’t what either party brought in. The AI brings breadth — patterns from ten thousand adjacent situations. I bring depth — the specific context, the constraints that aren’t in any training set, the intuition I can’t fully articulate.

The decision that emerges is neither the AI’s recommendation nor my initial instinct. It’s something built from the friction between them.

That’s the third collaborator.

Where society fits

Here’s what keeps me up at night: the third collaborator isn’t equally accessible.

The people who know how to work with AI well — who can prompt precisely, who understand the failure modes, who can tell when the model is confabulating versus when it’s genuinely reasoning — those people are compounding their advantage every day.

Everyone else is getting summaries.

This is the society problem. Not that AI exists. Not even that it’s powerful. That the interface is opaque, the learning curve is steep, and the people who figure it out first will shape the tools for everyone who comes after.

VRA-Lab is, in part, my answer to this. Build tools that make the third collaborator visible — the decisions, the reasoning, the knowledge that emerges from good human-AI interaction — and share them openly.

What I’m not saying

I’m not saying AI is neutral. It isn’t. It encodes assumptions, amplifies biases, optimizes for proxies.

I’m not saying the third collaborator is always wise. It can be confidently wrong in ways neither party would be alone.

And I’m not saying the distribution problem solves itself. It won’t.

But I am saying that the frame of replacement versus augmentation is too small for what’s actually happening. We need a new frame.

The third collaborator is real. The question is what we build with it.