The Week OpenAI Blinked — and Anthropic Hit $19 Billion
How the Pentagon handed Anthropic its best week in history.
A Pentagon contract, a ‘Quit GPT’ campaign with 1.5 million sign-ups, and a coding tool that went from zero to $2.5 billion in nine months. What actually happened — and what it means for anyone buying enterprise AI.
On the morning of 28 February 2026, Anthropic walked away from a contract with the US Department of Defence. The DoD had been negotiating with the company for months over the use of its AI models in classified operations. Anthropic’s position was straightforward: it would not allow its technology to be used for domestic surveillance of American citizens or in autonomous weapons systems capable of making lethal decisions without human oversight. The Pentagon wouldn’t accept those terms. Anthropic walked.
Hours later, OpenAI stepped in.
What followed was one of the most damaging weeks in OpenAI’s history and one of the most commercially spectacular in Anthropic’s.
The deal that backfired
OpenAI CEO Sam Altman announced on Friday evening that his company had agreed to deploy its models on the Pentagon’s classified networks. The timing, within hours of Anthropic’s public refusal, struck many observers as opportunistic. It struck some of OpenAI’s own employees the same way.
Internal frustration surfaced publicly within 48 hours. Research scientist Aidan McLaughlin posted on X that he personally did not think the deal was worth it. Others spoke to CNN anonymously, saying they ‘really respect’ Anthropic for standing its ground. The sidewalk outside OpenAI’s San Francisco offices was covered in chalk graffiti. Protesters gathered. A digital campaign called QuitGPT launched and, according to The Times (UK), attracted around 1.5 million sign-ups within days. [Note: some sources reported a higher figure of 2.5 million; the lower figure is used here as the more conservative estimate.]
“We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy. Good learning experience for me as we face higher-stakes decisions in the future.”
Sam Altman, internal memo to OpenAI staff, 3 March 2026
Altman moved quickly. By Monday, he had reworked the contract language to explicitly prohibit the use of OpenAI’s technology for domestic surveillance, including via commercially acquired data such as location information or fitness app records — the grey area Anthropic had specifically flagged as a concern. He also publicly stated that Anthropic should not have been designated a supply chain risk and urged the Pentagon to reverse the decision.
The revisions satisfied few critics. Legal experts noted that the contract’s protections were anchored in current law, which the government could alter through executive orders. Jessica Tillipman of George Washington University’s law school noted that the published excerpt did not grant OpenAI a free-standing contractual right to prohibit otherwise lawful government use.
The ambiguity was precisely what Anthropic had refused to accept.
Blacklisted — and booming
The Pentagon’s response to Anthropic’s refusal was punitive. Defence Secretary Pete Hegseth designated Anthropic a supply chain risk to national security — a designation, according to MIT Technology Review, that has never previously been publicly applied to an American company. Under the designation, no Pentagon contractor or supplier can conduct commercial activity with Anthropic. President Trump separately directed federal agencies to immediately cease using Anthropic tools. The US Department of State, the Treasury, and the Federal Housing Finance Agency all announced they were ending their use of Anthropic’s services.
Claude surged to the top of the Apple App Store’s free app rankings over the weekend, displacing ChatGPT. The QuitGPT campaign explicitly recommended Claude as an alternative. By Monday, Anthropic’s run-rate revenue had jumped to approximately $19 billion — up from $14 billion just weeks earlier, and from $9 billion at the end of 2025.
Anthropic’s annualised revenue stood at around $1 billion at the end of 2024. The New York Times technology podcast Hard Fork cited roughly 20x growth in just over a year — a trajectory with no real precedent in enterprise software. [Note: The Media Stack has not independently verified the exact multiplier from the Hard Fork episode; the underlying ARR figures — $1B end of 2024 to $19B March 2026 — are sourced separately from Bloomberg and Sacra.] Bloomberg confirmed the $19 billion run-rate figure, citing people familiar with the matter, and attributed the growth to the adoption of Anthropic’s AI models, specifically its coding tool, Claude Code.
Claude surged to number one on the Apple App Store. Anthropic’s annualised revenue had reached $19 billion. The Pentagon had just blacklisted them.
The product nobody saw coming
Claude Code launched as a standalone product in February 2025. By February 2026, it was generating $2.5 billion in annualised revenue — more than double its level at the start of the year. According to the analysis platform Sacra, it is the fastest product ramp in the history of enterprise software.
According to analysis by SaaStr, 4% of all public commits on GitHub are now authored by Claude Code, with projections suggesting this could reach 20% by the end of 2026. Boris Cherny, head of product at Anthropic for Claude Code, has said he now writes 100% of his daily code through the tool. Across Anthropic’s engineering teams, between 70% and 90% of all code is reportedly produced by it.
In January 2026, Anthropic launched Cowork, described internally as ‘Claude Code for general computing’, to automate white-collar tasks including spreadsheet work, report drafting, and file management. According to SaaStr, the launch triggered a significant sell-off in global software stocks, with the sector losing roughly $2 trillion in market capitalisation as investors repriced the disruption risk to traditional enterprise software vendors.
Anthropic’s enterprise model has always been different to OpenAI’s. Claude has approximately 5% of ChatGPT’s consumer user base but generates revenue equivalent to around 40% of OpenAI’s. The company monetises at roughly $211 per monthly user, compared with OpenAI’s $25 per weekly user, an 8x difference in revenue efficiency. Eight of the Fortune 10 are now Claude customers. Over 500 companies spend more than $1 million annually on Anthropic products.
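The 8x figure can be reproduced from the numbers above. A caveat: the two denominators differ (monthly active users for Anthropic, weekly for OpenAI), so this is a rough per-user comparison rather than a like-for-like one; the sketch below simply checks the arithmetic as the sources report it.

```python
# Rough check of the per-user monetisation gap cited above.
# Figures are the article's reported numbers; the division ignores the
# monthly-vs-weekly cadence mismatch, as the original comparison does.
anthropic_rev_per_monthly_user = 211  # USD per monthly active user
openai_rev_per_weekly_user = 25       # USD per weekly active user

multiple = anthropic_rev_per_monthly_user / openai_rev_per_weekly_user
print(f"{multiple:.1f}x")  # about 8.4x, consistent with the ~8x cited
```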
A paradox, not a clean win
Anthropic is simultaneously the fastest-growing enterprise software business in history and a company politically blacklisted by its largest potential customer. The supply chain risk designation, if it survives legal challenge, could force disentanglement of Claude across Pentagon contractor networks, despite the fact that, as MIT Technology Review noted, Claude is currently the only AI model actively deployed in the military’s classified systems, including in operations in Venezuela. Anthropic has said it will sue if the designation is pursued.
The Pentagon is also in a strange position: it has escalated against the only AI company whose product it was actually using in live classified operations, while simultaneously pressuring that company to accept terms that its own employees and independent legal experts say are weaker than what Anthropic was seeking.
Enterprise AI is no longer a back-office experiment. It is core infrastructure embedded in classified government networks, driving product development timelines, and generating the commercial numbers that are reshaping how investors value the entire software sector. Companies selecting AI vendors now are making decisions with consequences that extend well beyond the immediate use case.
The OpenAI-Anthropic divergence — one company built on consumer scale, the other on enterprise trust and safety positioning — is becoming a genuine differentiator in buying decisions. The QuitGPT campaign is consumer sentiment, not B2B procurement. But it points in the same direction. Brand trust in AI now moves commercial decisions.
Anthropic’s growth was already extraordinary before this week. The Pentagon’s intervention may have accelerated it.