I have three AI tools open right now.

One I use for almost everything. Another I use as a sanity check when the first doesn't give me what I need. The third I keep around out of something I can only describe as professional courtesy.

A few months ago, I caught myself apologizing internally for switching tabs. Not to a person. To a model.

That's a strange thing to notice.

Loyalty to a tool rarely comes from where the tool is best. It comes from where you met it first.

ChatGPT didn't win on benchmarks. It won because it was the first AI that felt like it understood what you were asking. For millions of people, it effectively replaced Google overnight, not because it was more accurate, but because the interaction was different. You asked it something and felt heard. That first valued interaction doesn't let go easily. It's why OpenAI still owns so much mindshare even as the technical gap with competitors has closed.

But the loyalty map has fractured since then.

Copilot embedded itself into companies that already ran on Microsoft. The adoption wasn't a choice, it was infrastructure. Whole organizations woke up one day with an AI built into the tools they were already required to use. That's not conviction. That's gravitational force at work.

Coders found their way to Claude. Not because of marketing, but because the outputs were better for their specific work. Claude Code, in particular, became the serious tool for people who needed an AI that could reason through a codebase rather than just complete a line. That reputation spread peer to peer, not through campaigns.

And now the landscape is shifting again. The breakout capability of 2026 isn't better chat. It's agents. AI that can take sequences of actions, not just answer a single question. Anthropic has doubled down here. Which means the loyalty patterns are about to get renegotiated for a lot of people whether they're ready or not.

The second force shaping loyalty is subtler: what you use at work bleeds into what you reach for personally. If your company standardized on one platform, that's what your hands learned. The interface is familiar. The shortcuts are muscle memory. Switching feels like friction even when it might be worth it.

Two paths in, then. First interaction. Workplace default. Neither of them has much to do with which tool is actually best for how you think.

This matters for a reason that goes beyond platform preferences.

Every tool you use consistently is shaping how you think. Not because it's manipulating you, but because repeated interaction with any system creates patterns. You start to anticipate how it will respond. You pre-filter your prompts to avoid its weaknesses. You develop a shorthand.

You adapt to the tool. And the tool, through your continued use, adapts to you.

The printing press reshaped how ideas spread. The smartphone reshaped how memory works. AI is reshaping how we approach thinking itself — and the specific AI you use most is reshaping how your particular mind works.

That's not alarming. But it is worth knowing.

Because if your tool of choice is shaping your thinking, the question worth asking isn't which AI is best in a benchmark.

It's which one is making you think better.

There's a test I've started running on myself.

Not "which model gave me the best output", but "after working with this tool for an hour, do I feel sharper or flatter?"

The distinction matters. Some interactions leave you with more. You followed a line of reasoning further than you would have alone. You caught an assumption you'd been carrying. The output pushed back in a way that was useful.

Other interactions leave you with less. The answer came too fast. You accepted it without examination. The cognitive work got outsourced before you'd done enough of it yourself.

The tool isn't doing this to you. You're doing it, through how you use the tool.

Which means blind loyalty — comfort mistaken for fit — can quietly work against you.

Not because the tool is bad. But because comfort doesn't push.

Praxis

This week: The Loyalty Audit.

Pick the AI tool you use most. Not the one you think you should use — the one you actually open first.

Spend 20 minutes with a different one. Same type of task. Similar prompt.

Then ask two questions:

  1. Where did the output differ and why?

  2. Did the process of working with a different tool surface any assumptions you'd been running on autopilot?

You're not looking for a better tool. You're looking for what your current loyalty has made invisible.

One 20-minute session. That's it.

If the Loyalty Audit opens something you want to keep exploring, or if it surfaces that the way you're working with AI deserves a harder look, that's the work we're doing in Beyond the Prompt.

Cohort 1 is 12 people. Six spots are taken. Six are left. First call: May 12.

Loyalty is how rapport becomes a habit.

The question is whether the habit is still serving you — or whether you're just comfortable.

There's a difference.

– Michael
Founder, The Drop In
& Author of 'Human Traits — a novel exploring humanity's relationship with AI'