Anthropic just leased Elon's Memphis supercluster
Anthropic gets Colossus 1, OpenAI's depth gap widens to 3.5x, and Wendy's keeps scaling while Taco Bell pauses.
In this edition:
This week: Anthropic leased Musk's Memphis supercluster, OpenAI's enterprise data shows a 3.5x depth gap, and Wendy's vs Taco Bell makes the case for escalation-first AI design
Under the radar: Anthropic launched Claude Security and a $1.5B services entity
What's on the calendar: OpenAI DevDay London, Microsoft Build, and the first Q1 productivity print with Copilot in the data
Free webinar: Build Notion Agents to Automate Complex Tasks, a 30-minute Lightning Lesson, May 14 at 2 pm ET. Save your seat →
THE WEEK IN AI
THE WEEK IN ONE SENTENCE
Three of this week's biggest stories all point in the same direction: the constraint on enterprise AI has shifted from compute and model access to what companies are actually doing with both.
THREE SIGNALS
01 • Compute
Anthropic took over Elon Musk's Memphis supercluster
On Wednesday, Anthropic signed an agreement to use all of the compute capacity at SpaceX's Colossus 1 data center in Memphis. That is more than 300 megawatts and over 220,000 NVIDIA GPUs coming online inside a month. Hours later, Musk folded xAI into SpaceX and rebranded the products SpaceXAI. The cluster Musk launched in 2024 to train a frontier model that would beat OpenAI and Anthropic is now being used by Anthropic.
Anthropic also doubled the five-hour rate limits on Claude Code for Pro, Max, Team, and Enterprise plans on the same day, and removed the peak-hour throttle on Pro and Max. Teams that have noticed Claude feeling slower over the last month should see that ease up this week.
The bigger signal is for anyone who has been worried about an AI spending bubble. Microsoft, Amazon, Google, and Meta are on track to spend more than $300 billion on AI infrastructure in 2025, and the bear case has rested on the assumption that the spend is locked to individual labs' bets and gets stranded if those bets fail. This week complicates that case. xAI's flagship cluster did not get stranded when Musk wound down the model team it was built for. It found a new tenant inside the same business day, because demand for frontier compute is still running ahead of any single company's plan.
For an operator, Claude is going to feel less rate-limited for a while, and the compute story is no longer a reason to delay an internal AI rollout on the assumption that a vendor might fold.
AI READY IN 21 DAYS · FREE
A 21-day AI program. In your inbox. Then it ends.
One email every morning at 7 am ET. Each one is a short read and one real thing to try before lunch. By Day 21, you have nine concrete capabilities, including prompting that hits your bar on the first draft, AI workflows that take five hours a week off your plate, and the language to lead the AI conversation at your company.
Built for ops leaders, COOs, chiefs of staff, founders, and team leads at mid-market companies. The four phases move you from foundations (effective prompting, catching hallucinations, learning AI with AI) to connect (a personal context layer, daily automation, five hours a week back), to build (reusable skills, agent design and debugging), to lead (tool and model evaluation, leading the conversation at work). Twenty-one mornings, no upsells, no community Slack.
Not sure if it is for you? Take the 2-minute diagnostic.
02 • Adoption
OpenAI's first enterprise data report says the AI gap inside companies is now 3.5x
OpenAI released its first B2B Signals report this week, drawn from de-identified usage across enterprise customers. Firms in the 95th percentile of AI use now consume 3.5x as much intelligence per worker as typical firms, up from about 2x a year ago. Message volume explains only 36% of that gap. The rest is depth, defined as longer prompts, richer context, and more substantive outputs per interaction.
The gaps in agent and coding usage are even wider. Frontier firms send 16x as many Codex messages per worker as typical firms. Cisco is the named case in the report. Codex helped the company cut build times by about 20%, save 1,500 engineering hours per month, and increase defect-resolution throughput 10 to 15 times. The team described treating Codex "as part of the team" rather than as a tool.
I want to be honest about the bear case. This is OpenAI's data, scoped to OpenAI's products, and OpenAI has every reason to publish a report whose conclusion is "use our advanced tools more." Take the 3.5x as a directional signal, not a benchmark to manage to. The diagnostic underneath it is what is useful. Most internal AI dashboards still measure access: seat count, license utilization, and percentage of employees with a login. According to OpenAI's data, those numbers no longer correlate with results in the way operators have assumed they do. The companies pulling ahead are tracking something closer to depth, including how much of a real workflow is being delegated and what share of multi-step work gets completed end to end. That is a harder thing to measure, and most internal teams have not started.
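The distinction between access metrics and depth metrics can be sketched in a few lines. This is an illustrative example only, not OpenAI's methodology; the log schema and field names are hypothetical stand-ins for whatever an internal usage dashboard actually records.

```python
from dataclasses import dataclass


@dataclass
class Interaction:
    """One AI interaction from internal usage logs (hypothetical schema)."""
    user_id: str
    prompt_tokens: int          # proxy for prompt length and context richness
    steps_delegated: int        # workflow steps the AI handled in this interaction
    completed_end_to_end: bool  # did the multi-step task finish without a human takeover?


def depth_metrics(interactions: list[Interaction]) -> dict:
    """Depth-style metrics, as opposed to access metrics like seat count."""
    n = len(interactions)
    if n == 0:
        return {"avg_prompt_tokens": 0.0, "avg_steps": 0.0, "e2e_rate": 0.0}
    return {
        # How rich is the average interaction?
        "avg_prompt_tokens": sum(i.prompt_tokens for i in interactions) / n,
        # How much of a real workflow is being delegated?
        "avg_steps": sum(i.steps_delegated for i in interactions) / n,
        # What share of multi-step work completes end to end?
        "e2e_rate": sum(i.completed_end_to_end for i in interactions) / n,
    }
```

A seat-count dashboard would report the same number whether every interaction is a one-line question or a delegated ten-step workflow; the three metrics above are the ones that would move.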
03 • Deployment
Wendy's AI now takes 86% of drive-thru orders, and the difference between Wendy's and Taco Bell is architectural
The 2025 InTouch Insight QSR Drive-Thru Study, reported this week by AI Adopters Club, ran 120 mystery shops across three AI-equipped chains. AI accuracy averaged 83% against 87% for humans, and staff intervened in 62% of AI errors. The promised reduction in workload did not arrive. Employees were repositioned from taking orders to supervising AI taking orders, with the same headcount on the floor.
But Wendy's is the outlier. It launched FreshAI in 2023 with AI as an assistant and human escalation built in from day one. The result: 86% of orders completed without human help, roughly 99% accuracy once staff escalation is included, an 80-basis-point improvement in restaurant-level margin, and expansion from 160 to over 500 locations through 2025. Taco Bell tried to build a system that never needed the human. McDonald's did the same thing and walked away after 30 months. Both treated the escalation path as a problem to eliminate rather than a feature to design around. And so customers found the gap before the operators did.
If your company is shipping any customer-facing AI in 2026, including voice, chat, support, or sales, the question worth asking before scoping is not what the model accuracy ceiling looks like. It is what the handoff path is before a customer sees the system, and who owns it. If the answer is that the handoff is something to figure out after launch, the rollout is closer to the Taco Bell pattern than the Wendy's one.
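The escalation-first pattern can be made concrete with a small sketch: gate every AI response on confidence and route low-confidence turns to a human before the customer sees a failure. This is a minimal illustration, not Wendy's actual FreshAI architecture; the threshold value and handler names are assumptions.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class OrderResult:
    text: str
    confidence: float  # model's calibrated confidence, 0..1 (hypothetical)


def take_order(
    ai_respond: Callable[[str], OrderResult],
    human_respond: Callable[[str], str],
    customer_utterance: str,
    threshold: float = 0.85,
) -> tuple[str, bool]:
    """Escalation-first loop: the AI answers only above the confidence bar.

    Returns (response, escalated). The human handoff is a designed path,
    owned before launch, not something to figure out afterward.
    """
    result = ai_respond(customer_utterance)
    if result.confidence >= threshold:
        return result.text, False
    # Below the bar: hand off before the customer sees a wrong answer.
    return human_respond(customer_utterance), True
```

The design question the Wendy's/Taco Bell comparison raises lives in the last two lines: who `human_respond` actually is, and how fast that path fires, matters more than pushing the model's accuracy ceiling a point higher.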
UNDER THE RADAR
While most of the week's coverage went to compute and capex, Anthropic held its first developer conference, Code with Claude, on Tuesday in San Francisco. Daniela Amodei, Anthropic's president, opened by saying that Claude's transition from chatbot to colleague is now complete. The framing is rhetorical. The two product launches underneath it are not.
The first is Claude Security, now in public beta for Claude Enterprise customers. It uses Opus to scan codebases for vulnerabilities, deduplicate alerts, and recommend fixes for developer approval. Anthropic is now competing with dedicated application security platforms that have been the system of record for this category for years.
The second, less discussed but more interesting for the readers of this newsletter, is a new $1.5 billion Anthropic services entity announced jointly with private equity firm Hellman & Friedman, with Apollo and General Atlantic also backing it. The entity is set up to deploy Claude into mid-sized businesses with dedicated implementation engineers attached. Anthropic's framing is "forward-deployed engineering at scale."
That is a meaningful structural move. Until this week, the standard path for a 200-person company that wanted to use AI seriously was to hire a consulting firm or build internal capacity from scratch. A model lab now has its own services arm, capitalized at $1.5 billion, designed to handle the implementation work the labs have historically left to partners. The path through Accenture or Deloitte is no longer the only path, and the path through a model lab's own deployment team is no longer reserved for Fortune 100 companies.
If you are scoping an AI initiative this quarter and someone has already pitched you an outside services partner, ask what their relationship to the model labs looks like, what gets handed off when the engagement ends, and what happens to the integration work if the lab decides to take the customer in-house. None of that information is hard to get. It is just not in the standard procurement checklist yet.
QUOTE OF THE WEEK
❝"We have signed an agreement with SpaceX to use all of the compute capacity at their Colossus 1 data center."
Anthropic announcement, May 6, 2026
Two years ago, Elon Musk was posting that Anthropic should be renamed "Misanthropic." This week, he leased them his entire Memphis cluster.
SPONSORED BY CLUTCH
Hire secure AI teammates that work 24/7.
Hire pre-built AI teammates. Give your engineers and operators a platform to ship their own AI apps. Stop losing sleep about what is running where.
Clutch is the platform behind both: pre-built agents for the workflows your ops team should automate first, plus the integration plane your team's vibe-coded apps and Claude Code projects plug into. One platform. Real production. Visible and safe by default.
Built for ops, engineering, and security teams that are tired of the shadow-AI surface area inside their own company.
WHAT’S ON THE CALENDAR
OpenAI DevDay London opens Monday, the company's first European developer event of the year. Watch for which agent and Codex features get the international rollout treatment. The gap between US and international availability has been the quiet friction point for enterprise buyers outside North America.
Microsoft Build keynote runs Tuesday. The question is how much of the May 12 GA wave is genuinely net-new versus a renaming of features already shipping inside Copilot. If the answer is mostly repackaging, the procurement conversation doesn't move. If there are new surface integrations, it does.
The BLS releases its first Q1 nonfarm productivity revisions on Wednesday. It's the first government data point that should reflect the agentic-Office rollout; Microsoft said Copilot hit 20 million paid seats in March. The number won't be definitive, but it's the first time AI-assisted productivity shows up in official economic data rather than vendor case studies.
P.S.
If you sponsor a customer-facing AI deployment this year (voice, chat, support, anything customers can see), hit reply and tell me one sentence about your handoff path. I will send back the two questions I would ask before signing off on the scope. I read every reply.
Have a good weekend,
Haroon