
AI Brief: Your AI vendor supply chain just became your attack surface
Today's signal: the AI tools you connected to your systems over the last 18 months are now a supply-chain attack surface, and three breaches in one week are the evidence. Plus, what to do about it, OpenAI going after enterprise, and five other things that moved.
THE AI BRIEF
THE READ
Anthropic's Mythos, the cybersecurity AI shared only with Apple, JPMorgan, Nvidia, and a handful of other partners, was accessed by unauthorized users through a third-party vendor environment. Vercel confirmed a breach caused by a stolen OAuth token from a third-party AI tool a single employee had connected, giving attackers access to internal systems, GitHub, and npm tokens.
Vibe-coding platform Lovable was found to have a broken-authorization flaw that let any free account read other users' credentials, source code, and chat history. No jailbreak, no prompt injection, none of the AI-specific attack classes the industry has been debating for two years.
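For readers unfamiliar with the bug class: broken authorization means the server checks that you're logged in but not that the resource you're asking for is yours. A minimal sketch in Python (hypothetical handler and model names, not Lovable's actual code) shows both the flaw and the fix:

```python
from dataclasses import dataclass


@dataclass
class Project:
    id: int
    owner_id: int
    source_code: str


# Toy in-memory store standing in for a real database.
PROJECTS = {1: Project(id=1, owner_id=42, source_code="secret")}


def get_project_vulnerable(project_id: int, current_user_id: int) -> Project:
    # Broken authorization: any authenticated user can fetch any
    # project just by supplying its ID.
    return PROJECTS[project_id]


def get_project_fixed(project_id: int, current_user_id: int) -> Project:
    project = PROJECTS[project_id]
    # The fix: verify the requester actually owns the resource
    # before returning it.
    if project.owner_id != current_user_id:
        raise PermissionError("not your project")
    return project
```

With the vulnerable handler, user 99 can read user 42's source code; with the ownership check, the same request fails. No AI involved at any step.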
What the three share is not an AI-model failure. The failure is in the supply chain, and the AI tool is simply the new attachment point for a very old kind of vulnerability. Every AI product your company has connected to its systems over the last 18 months got access through the same mechanisms: OAuth grants into SaaS accounts, API keys into databases, service accounts into the cloud. If any of those vendors gets breached, the blast radius is your systems.
This was always true of third-party integrations, but there are substantially more AI integrations now, and most got adopted in 2024-2025 without the usual procurement review.
Your AI vendor supply chain is a production attack surface that nobody at your company has fully inventoried. You probably cannot name every AI tool with OAuth scope in your Google Workspace, Salesforce, or GitHub org, let alone what scope each holds. That inventory gap is the risk.
The move worth making this week is asking your security team for a full list of AI integrations currently holding OAuth or API credentials in your systems, with scope. Then ask for a date by which each will be reviewed against the same procurement standard you hold other SaaS vendors to. If the list takes longer than a week to produce, that answers a different question: AI adoption at your company has outpaced the controls around it.
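Once the list exists, triage is mechanical. A sketch of that pass, with illustrative assumptions throughout (the scope names and risk tiers here are placeholders; map them to your actual identity provider's scopes):

```python
from dataclasses import dataclass

# Scopes we treat as high blast-radius if the vendor is breached.
# Illustrative tier only; substitute your IdP's real scope names.
HIGH_RISK_SCOPES = {"repo", "admin:org", "gmail.modify", "drive"}


@dataclass
class Integration:
    name: str
    scopes: set
    reviewed: bool = False  # passed the standard SaaS procurement review


def triage(integrations):
    """Return integrations needing immediate attention: anything holding
    a high-risk scope, or anything with no procurement review on record."""
    return [
        i for i in integrations
        if (i.scopes & HIGH_RISK_SCOPES) or not i.reviewed
    ]


# Hypothetical inventory, not a real vendor list.
inventory = [
    Integration("ai-notetaker", {"calendar.readonly"}, reviewed=True),
    Integration("code-assistant", {"repo"}, reviewed=True),
    Integration("ai-crm-plugin", {"contacts.readonly"}, reviewed=False),
]

flagged = triage(inventory)
```

In this toy inventory the read-only, reviewed notetaker passes, while the tool holding `repo` scope and the unreviewed plugin both get flagged, which is the shape of the conversation to have with your security team.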
Another breach in this pattern is coming, and when it does, the companies that have already done the inventory will treat it as a configuration incident, while the ones that haven't will be on calls with outside counsel.
FREE WORKSHOP
I'm running a free 30-min Lightning Lesson on Claude Design later today. RSVP here:

ALSO WORTH KNOWING
OpenAI launches Workspace Agents in ChatGPT
Shared Codex-powered agents for Business, Enterprise, Edu, and Teachers plans, handling long-running workflows across tools and teams. This is the shift from individual-user ChatGPT to enterprise-installed ChatGPT, and the rollout is in production on those plans now, rather than waitlisted.
Claude Code launches /ultrareview as a research preview
One command deploys a fleet of bug-hunting agents in a remote sandbox to find and verify bugs in your branch or pull request before you merge. This is autonomous code review at scale rather than inline suggestion, and it's the pattern every coding-agent vendor is going to have to match within months.
The Information reports OpenAI and Anthropic are pulling back from reasoning chains
Both labs are reducing reliance on explicit reasoning in newer models because pretraining improvements are making dedicated reasoning modes less valuable. Any product strategy built around "reasoning models will keep getting better faster" needs revisiting, including the assumption that inference-time compute scaling is the primary axis of model progress.
Google reports 75% of new code at Google is AI-generated
Sundar Pichai confirmed at Cloud Next '26 that AI-written code at Google has climbed from 50% last fall to 75% today, reviewed by human engineers. The operator question this forces is not "should our engineers use AI-assisted coding" but "what does the post-transition staffing and review structure at our company look like," because Google is now publicly showing everyone what the new baseline is.
SpaceX obtains the right to acquire Cursor for $60 billion
SpaceX and Cursor announced this week that SpaceX can buy the company for $60B by year-end or pay $10B if it doesn't, and CNBC reports Microsoft quietly looked at a bid before pulling out. The specific outcome matters less than the signal, which is that top-tier AI coding tools are now inside the Musk portfolio alongside xAI, Grok, and the SpaceX compute buildout.
WATCHING TOMORROW
GPT-5.5, codenamed "Spud," is widely expected to launch today and is rumored to outperform Claude Opus 4.7 on the key reasoning and coding benchmarks. If it lands, we'll lead Friday's Week in AI with it. Also on the watch list: Anthropic's post-incident writeup on the Mythos breach, which the company has committed to publishing with root-cause detail, and Firebase Firestore Enterprise entering general availability after the Google Cloud Next announcement.

Back tomorrow,
Haroon