
Switching AI Foundations Is About to Get Expensive
The case for one vendor, and when to break your own rule.
THE ONE THING TO TAKE AWAY
Switching model vendors is no longer just an API migration. It is context, workflows, and institutional memory. Most operators should lock in now with eyes open; the minority who shouldn't need a sharp framework for when to pivot. Here's that framework.
WHAT'S INSIDE
Why the model layer is the easy part, and where lock-in actually sits
Three layers of hidden dependency most operators haven't mapped
The counterargument, and why it doesn't solve the data-gravity problem
The portability stress test you can run with your team Monday morning
Since Friday: 5 news items worth your attention before you run it
P.S. I'm running a free 30-min Lightning Lesson on Claude Design this Thursday. RSVP here:

Three of us were waiting for pizza last week when the conversation turned to Claude Design, Anthropic's latest product drop. Within minutes, we were on the question every operator I know is asking: Who's winning, OpenAI or Anthropic? And should I be switching?
The three of us had three different answers.
One friend runs a Series C company with 150+ employees. His team is deep in Anthropic's ecosystem. Files, contacts, workflows, all living inside Claude's collaboration tools. He figures the model is 90% as good as anything else, switching would be painful, so why bother?
Intercom doubled its engineering velocity in nine months by going all-in on Claude Code, per Senior Principal Engineer Brian Scanlan speaking on Lenny Rachitsky's podcast. That's what deep dependency looks like when it pays off.
Another friend runs a smaller startup. He switches between OpenAI's GPT-5.4 and Anthropic's Opus depending on the task. GPT-5.4 is better at coding right now; Opus is stronger at everything else. His team is small enough that switching costs barely register.
Earlier that week, at a dinner, I'd heard the opposite take from a health tech founder whose first company went public. He runs his new venture almost entirely on open source models. Vendors had pushed model updates that broke his production performance in a regulated domain one too many times. Open source was the only way to guarantee the model doesn't change underneath you, he said, and he meant it the way people mean things they learned expensively.
My Series C friend is probably right that Anthropic is the best choice for his team today. But when I asked him where exactly his lock-in was accumulating, he didn't have a quick answer. That's the part that stuck with me.
What none of us had done, and what I suspect most operators haven't done either, is map where our AI vendor lock-in is actually accumulating. Not at the model layer, where everyone is looking, but in the workflows and data and tooling decisions around it. By the time you need to switch, those are the layers that make switching expensive.
The leapfrog that started the conversation
The reason we were even having this conversation is that the leapfrogging between OpenAI and Anthropic has compressed to the point where picking a side feels less like strategy and more like timing.
Last week, on April 16, Anthropic shipped Opus 4.7. One hour later, OpenAI fired back by turning Codex into a super-app: background computer use on Mac, an in-app browser, 90+ plugins, a 1M context window for GPT-5.4. A week earlier, GPT-5/fast mode had launched. A week before that, Anthropic had increased rate limits across the board.
None of this is new. But the cycle time is compressing. The best model for a given task changes every few months, and my smaller-startup friend sees it play out in real time: GPT-5.4 pulled ahead on coding; Opus 4.6 is better at writing and complex reasoning. Neither provider dominates everything. Aaron Levie, the CEO of Box, put it this way last week: "If you're building agents, you basically have to use all of them."
When a16z surveyed 100 enterprise CIOs earlier this year, 37% said they run five or more models in production, up from 29% the year before. The multi-model default is already here, which makes the question less about choosing a provider and more about whether your organization can actually move between them when the math changes.
The lock-in isn't where you think it is
When operators worry about AI vendor lock-in, they're usually thinking about the model layer. Can I swap an OpenAI API call for an Anthropic API call? Yes, in an afternoon. The APIs are similar enough, and with longer context windows replacing fine-tuning, your prompts are more portable than they were a year ago.
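To make the afternoon claim concrete, here's a minimal sketch of that swap, assuming the two official Python SDKs (openai and anthropic). The model IDs and the example prompt are placeholders for whatever your team actually runs, and error handling is omitted; the point is how thin the model layer is.

import anthropic
from openai import OpenAI

def ask_openai(prompt: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder: use whatever your team has deployed
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_anthropic(prompt: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    resp = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model ID
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

# Swapping vendors at this layer is a one-line change at the call site:
answer = ask_anthropic("Summarize our Q3 churn drivers in three bullets.")

An afternoon of work, maybe less. The expensive part is everything this sketch leaves out: the agent chains, the guardrails, and the data those calls depend on.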
The same survey found companies pulling back from fine-tuning altogether. One enterprise put it simply: "instead of taking the training data and parameter-efficient fine-tuning, you just dump it into a long context and get almost equivalent results." That shift quietly makes model portability easier. Fewer fine-tuned weights means less sunk cost when you want to move.
Swapping one model API for another is the easy part. The lock-in that actually costs you money sits in the three layers around the model, and most operators I talk to haven't mapped any of them.

The three layers operators haven't mapped
Context and files. My Series C friend's team has their documents, contacts, and project knowledge living inside Anthropic's tools. That's not model lock-in. That's data lock-in, the same kind of companies created when they dumped everything into Evernote or ran all their conversations through Slack without an export plan.
Workflows and agents. This layer is getting expensive fast. One CIO in the a16z survey put it bluntly: "All the prompts have been tuned for OpenAI. Each one has its own set of instructions and details. Quality assurance of agents is not super easy, so changing models is now a task that can take a lot of engineering time." Multi-step agent chains with custom guardrails and QA loops are not something you rebuild in a weekend.
The application layer. Most operators miss this one entirely. Your lock-in might not be to Claude or GPT. It might be to the app that wraps them. If your team relies on Notion AI (which runs on Claude under the hood), or ChatGPT Enterprise for internal search, or a customer support tool built on GPT-5, you've made a model bet through your tooling choices without realizing it. And as more enterprises buy AI apps instead of building them, that hidden dependency is becoming the norm.
My Series C friend described Cowork, the Anthropic collaboration tool his team lives in, the way people describe any tool they've built their workflow around: it works, the team is productive, and there's nothing comparable from another vendor. All of which is true, and all of which is exactly how lock-in accumulates. The tools that make your team most productive are, almost by definition, the ones that make switching most expensive.
The risks are not hypothetical
A few weeks ago, Anthropic shut down Claude API access for a company with 60+ employees without warning, explanation, or an obvious appeals process. The founder posted about it publicly. The company had built deep dependencies on Claude, and overnight those dependencies became liabilities.
Separately, The Information reported (paywalled) that Anthropic changed its pricing model for heavy users, moving to usage-based billing. Companies that built deep dependencies on Claude started paying significantly more.
Neither of these situations is exotic. They're what you'd expect from AI vendor contracts that still lack protections standard in every other enterprise software category: notice periods before termination, data export windows, price change caps, SLAs that mean something. The contracts haven't caught up to the dependency.
The health tech founder I'd talked to earlier that week saw a different version of this. His problem wasn't price or access. It was that vendors updated models without warning and broke production performance. In healthcare, where output consistency is a compliance issue, that's unacceptable. He moved to open source because it's the only way to guarantee the model doesn't change underneath you. The a16z data supports this: open source adoption is highest at large enterprises where data security, regulatory consistency, and production stability matter more than absolute capability.
The counterargument: it doesn't matter because the tools will converge
I made this argument at lunch too. In a year or two, the collaboration tools from Anthropic, OpenAI, and Google will look more alike than different. The primitives (projects, shared context, agent workflows) will become standardized across providers, the way email clients and cloud storage converged.
And that's probably right, eventually. Both providers have virtually unlimited capital and are chasing the same enterprise use cases with the same priorities. Feature parity tends to follow when the money is this large.

Tool convergence is real. So is the lock-in accumulating underneath it.
But feature parity doesn't solve data gravity. Even if Anthropic and OpenAI ship identical collaboration features next year, migrating 50,000 files and a year of conversation history from one platform to another is a data migration project, not a product comparison. Nobody is building the tooling for that migration because neither vendor has any incentive to make leaving easy. And in the meantime, your lock-in deepens every month your team spends building workflows, storing context, and accumulating institutional memory inside a single vendor's platform.
What to do Monday: the portability stress test
The exercise is simple. Ask your team: if we had to change AI providers tomorrow, how difficult would that be?
If the answer is a week of migration work, you're fine. If it's months, or if nobody is sure, that's the lock-in problem manifesting. Here's the checklist we use to address it:
1. Keep context in a system you own. Store documents, contacts, and project knowledge in a vendor-neutral system: Google Drive, Notion, Confluence, whatever your team already uses. Treat your AI tools as a consumption layer that reads from your canonical source, not as the source of truth itself.
2. Version-control your prompts and workflows. If your agent chains, system prompts, and guardrails live only inside the vendor's platform, you can't port them. Store them in version control. Run periodic evals against alternative models so you know what a migration looks like before you need one; a minimal sketch follows this list.
3. Checkpoint your institutional knowledge. Every quarter, export the decisions, context, and tribal knowledge that your team has built up inside AI conversations. Move it into your own knowledge base. Don't let six months of decisions live only inside a vendor's chat history.
4. Give engineers model flexibility. Don't mandate a single vendor. The performance gap between providers is real and shifts every few months. Let your team use the best model for each task, and you'll naturally avoid single-vendor dependency.
5. Read your contract. Push for notice periods before termination, data export guarantees, and pricing protections. If your AI vendor won't offer terms that are standard in every other enterprise software category, that tells you something about how they view the relationship.
6. Map your application layer. Know which of your tools depend on which models. Your lock-in might be to Notion AI or your customer support platform, not to Claude or GPT directly. The stress test needs to happen at both layers.
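Here's the sketch promised in item 2: a small harness, assuming prompts live as plain text files in a prompts/ directory in your repo with test cases beside them as JSON. The file names, the support_triage example, and the crude keyword check are all illustrative assumptions, not a prescribed framework.

import json
from pathlib import Path

import anthropic
from openai import OpenAI

PROMPT_DIR = Path("prompts")  # version-controlled, e.g. prompts/support_triage.txt

def run_openai(system: str, user: str) -> str:
    resp = OpenAI().chat.completions.create(
        model="gpt-4o",  # placeholder model ID
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

def run_anthropic(system: str, user: str) -> str:
    resp = anthropic.Anthropic().messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model ID
        max_tokens=1024,
        system=system,
        messages=[{"role": "user", "content": user}],
    )
    return resp.content[0].text

def eval_prompt(name: str, cases: list[dict]) -> None:
    # Run one version-controlled prompt against both providers and report
    # pass rates, so a migration starts as a measured diff, not a leap.
    system = (PROMPT_DIR / f"{name}.txt").read_text()
    for label, run in (("openai", run_openai), ("anthropic", run_anthropic)):
        passed = 0
        for case in cases:
            output = run(system, case["input"]).lower()
            if all(kw.lower() in output for kw in case["expect_keywords"]):
                passed += 1
        print(f"{name} on {label}: {passed}/{len(cases)} cases passed")

if __name__ == "__main__":
    cases = json.loads((PROMPT_DIR / "support_triage.cases.json").read_text())
    eval_prompt("support_triage", cases)

The keyword check is deliberately crude; swap in whatever QA your agents already need. What matters is that the prompt, the cases, and the harness live in your repo, so trying another provider costs a function, not a quarter.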
Harrison Chase, the LangChain CEO, offered the forward-looking version of this on April 20: "Memory will be the great lock-in. Many companies are racing to control memory first." Whatever lock-in the three layers above describe today, the surface area is still expanding.
What this looks like when it goes wrong
A company we worked with last year ran a version of this stress test during their AI audit. They assumed their biggest vendor dependency was their model choice, OpenAI for everything. Turned out their actual lock-in was two layers removed: their customer support platform had integrated GPT-4 natively, their sales team's call analysis tool ran on Claude, and their internal knowledge base had been feeding documents into a third provider's embeddings for six months. Nobody had mapped it. Three different vendor dependencies, none of which appeared on the CTO's radar when he'd approved the original OpenAI contract.
The model API itself could have been swapped in a day. The application-layer dependencies would have taken three months to untangle, assuming the vendors even offered data export. That's the gap the checklist above is designed to close.
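A dependency map for that layer doesn't need tooling; a checked-in inventory your team diffs each quarter is enough. Here's a minimal sketch encoding the three dependencies from that audit; the field names and the concentration check are illustrative assumptions.

from collections import Counter

# Application-layer inventory: which tools carry a hidden model bet,
# and through which vendor. Entries mirror the audit described above.
STACK = [
    {"tool": "support_platform", "built_on": "GPT-4 (native)", "vendor": "openai"},
    {"tool": "call_analysis", "built_on": "Claude", "vendor": "anthropic"},
    {"tool": "knowledge_base", "built_on": "embeddings", "vendor": "third_party"},
]

# Concentration check: how much of the stack rides on each vendor.
exposure = Counter(entry["vendor"] for entry in STACK)
for vendor, count in exposure.most_common():
    print(f"{vendor}: {count} tool(s) exposed")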

Locked-in stack versus portable stack. Same components, different escape routes.
If the stress test above surfaced questions you couldn't answer quickly, score your AI readiness in 4 minutes. The Seeko AI Readiness Scorecard shows you where your foundations stand and what to fix first.
None of this is an argument against using AI aggressively. It's an argument for building on something you can move. In a market where the best model changes every few months, the companies that stay portable are the ones that get to keep choosing the best tool for the job.
SINCE FRIDAY
Five moves worth your attention before you run Monday's stress test.
Anthropic took another $5 billion from Amazon on Monday and committed to $100 billion in AWS spending over the next ten years, tying Claude harder to one cloud. If you run on Anthropic, your provider is now structurally harder to swap off of than it was last week.
Tim Cook will become executive chairman and John Ternus takes over as Apple CEO in a planned transition. A reminder that platform-level leadership turns over far less often than the products do, and the incoming CEO sets the AI posture for the next decade.
Polymarket currently has Anthropic at 81% to hold the best-model title through the end of April, which is the crowd betting real money that the Opus lead sticks through another OpenAI release cycle. Worth knowing if your procurement team is about to lock into a one-vendor stack: the market is pricing the gap as it stands today, not where it expects things to settle after the next few release cycles.
The NSA is reportedly using Anthropic's restricted Mythos model despite an ongoing Pentagon dispute. Government AI procurement keeps moving under the public feuds, which matters if you sell to federal agencies or their contractors.
Claude Design launched last week as Anthropic's move into the design-tool category, powered by Opus 4.7, which is the most direct evidence yet for the thesis at the top of this issue: lock-in is moving up from the API layer into the applications built on top of it. The workflows you embed into Claude Design over the next quarter will be harder to port to any other vendor than the API calls you make against Claude directly.