What if my company won't allow this?
I must have heard that question a dozen times while designing Beyond the Prompt.
Not from one person. From almost everyone I talked to. Knowledge workers, executives, people who were already using AI daily but hadn't built anything systematic yet. The question wasn't theoretical. Their machines are locked. IT has policies. Corporate devices run managed software. Claude Desktop requires installation. It's not on the approved list.
It's a real constraint. I didn't have a clean answer at first.
Then I remembered we've been here before.
In 2007, the iPhone appeared in the workplace.
IT departments panicked. Employees started using personal smartphones to check corporate email, open company documents, all on devices that IT didn't control, didn't manage, and couldn't wipe (the way they could with BlackBerry, which was built for corporate email). The devices were personal. The data they touched was not.
Companies scrambled for years. The answer that eventually emerged was BYOD — Bring Your Own Device. Governance frameworks. MDM policies. Conditional access rules. The enterprise acknowledged what was already true: employees were bringing personal tools into professional contexts. The question wasn't whether that was happening. It was how to manage it without pretending it wasn't.
We're in that moment again. Except this time, the dynamic is reversed.
BYOD was about a personal device accessing corporate resources. The employer was nervous because the hardware was outside their control. If the device was lost or stolen, what happened to the data on it?
BYOA — Bring Your Own AI — is the inverse.
A personal AI operating system that selectively pulls in work context. The user decides what the OS sees. They choose to share a calendar. They paste in a meeting transcript. They reference a strategy document they already had access to. The employer doesn't govern the container — the individual does.
That's a different power dynamic. Fundamentally. And it's why the instinct to lock it down is understandable, but ultimately misaligned with how this technology works.
Where should an AI operating system live?
My answer is clear. On personal hardware (with caveats, of course).
Not because of workarounds. Because the OS should span your whole life. Health data, personal projects, finances, creative work, the domains of your life that don't belong to any employer. None of that should live on a machine your company owns, can audit, or will reclaim when you leave.
Even if IT approves installation. The question worth asking first: is this machine fully mine?
For most corporate employees, the honest answer is no. Which means the recommendation is: run this on personal hardware. Pull in work context selectively. Calendar read access. Meeting notes you choose to reference. Documents you decide to bring in. You are the bridge. You curate what crosses.
Or consider the deliberate split.
Work agents on the work machine. Personal agents on the personal machine. Not a workaround — a design decision. If this is the only architecture that fits your situation, let the constraint be the feature. Work context stays inside work systems. Personal context stays fully yours. The boundary is the point.
Bridging is still possible without crossing murky policy lines. Calendar read access. Meeting notes you export. Summaries you carry across. The bridge is always you — what you decide to bring is the judgment call. What tools run on which hardware is not.
If your machine is locked and installation isn't possible at all — there's still a door. Browser-based access. Not the full capability, but the operating model. You can start there and migrate to a full setup when the hardware question resolves (and IT policies will inevitably relax).
The workplace will have to figure this out. The same way it figured out BYOD. The same way it eventually accommodated personal email, consumer SaaS, and every other tool people adopted before IT had a policy for it.
Governance always follows adoption. It never leads it.
The workers building personal AI operating systems right now are not violating a policy. Most of the time, they're ahead of one. The question isn't whether companies catch up.
They always do.
The question is what you build while they're catching up — and whether you've made the foundational decision clearly enough that nothing has to be rebuilt when the policy arrives.
Praxis
Before you build anything: make the decision.
One sentence, written down: My AI OS runs on [personal / work / both] hardware because ___.
If you can't fill in the blank, that's the decision you haven't made yet. Everything downstream of this choice depends on it. Architecture, access, what the OS can see, what it can't. Thirty seconds of clarity now saves a lot of headaches later.
Write the sentence. Make the call.
That sentence is where Beyond the Prompt begins.
The cohort doesn't start with tools. It starts with exactly what the Praxis asks: audit where you are, name the decision, and build the architecture from there. Three weeks, small cohort (≤12), starting May 12.
4 spots left. If you've made the call and want to build the rest: puhala.com/beyond-the-prompt. Beta rate: $197.
If you've been waiting to start: the door is open.
— Michael
Founder, The Drop In
& Author of 'Human Traits — a novel exploring humanity's relationship with AI'

