Anthropic's revenue surpasses $30 billion, signing major deals with Google and Broadcom for 3.5 gigawatts of computing power.
In the AI compute arms race, long-term compute supply agreements are becoming a competitive factor on par with capital and technology.
Author: Deep Tide TechFlow
**Deep Tide Reading:** On April 6, Anthropic disclosed that its annualized revenue surpassed $30 billion, more than tripling from $9 billion at the end of 2025. Within two months, the number of enterprise customers with annual spending above $1 million doubled from 500 to 1,000. On the same day, Broadcom's SEC filing confirmed that Anthropic will obtain about 3.5 gigawatts of next-generation TPU compute starting in 2027, its largest single compute commitment to date.
On April 6, Anthropic released two headline figures.
According to Anthropic's April 6 official blog, the company's annualized revenue has surpassed $30 billion, more than tripling from approximately $9 billion at the end of 2025. A filing Broadcom submitted to the SEC the same day disclosed that, starting in 2027, Anthropic will obtain about 3.5 gigawatts (GW) of next-generation TPU compute via Broadcom, as part of an expanded three-way collaboration among Broadcom, Google, and Anthropic.
Broadcom’s share price rose by about 3% after hours.
(Source: X user @damianplayer)
14 months from $1 billion to $30 billion: Claude Code is the core engine
Anthropic’s revenue curve has no precedent in the AI industry. According to timelines reported in public disclosures and multiple media outlets including Bloomberg: about $1 billion in December 2024, about $4 billion in mid-2025, about $9 billion at the end of 2025, about $14 billion in February 2026, nearing $19 billion in early March, and official confirmation of a breakthrough to $30 billion on April 6.
In a statement, Anthropic CFO Krishna Rao said the company is making “the most significant compute commitment to date, to match growth that has been unprecedented.”
Customer data is just as strong. During the Series G funding round in February this year, there were 500 enterprise customers with annualized spending exceeding $1 million; in less than two months, that number doubled to more than 1,000. As Anthropic previously disclosed, the core driver of growth is Claude Code, a product launched in May 2025; by February 2026 its annualized revenue had already exceeded $2.5 billion.
As a reference, Sacra estimates OpenAI's annualized revenue at roughly $25 billion (as of February 2026). Based on Epoch AI's analysis, since Anthropic's revenue first surpassed $1 billion, its annualized growth rate has been about 10x, versus about 3.4x for OpenAI over the same period. If this trend holds, the two companies' revenue crossover point may occur in mid-2026.
Note: the figures above are all annualized revenue (run-rate revenue), meaning an estimate derived from recent monthly revenue multiplied by 12, not actual cumulative revenue.
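The arithmetic behind a run rate is simple enough to sketch. The figures below are purely illustrative, not Anthropic's actual monthly numbers:

```python
def run_rate(monthly_revenue: float) -> float:
    """Annualized run-rate revenue: one recent month's revenue extrapolated to a year."""
    return monthly_revenue * 12

# Illustrative only: a month booking $2.5B of revenue implies a $30B run rate,
# even though cumulative revenue over the trailing year may be far lower.
print(run_rate(2.5e9))  # 30000000000.0
```

This is why a fast-growing company's run rate can leap ahead of what it has actually collected: the metric reflects the current pace, not the past twelve months.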
The 3.5 GW TPU agreement: the latest piece in Anthropic’s compute map
According to Broadcom’s SEC filing, the core of this agreement is: Broadcom will design and supply next-generation TPU chips for Google, with the supply relationship continuing through 2031; starting in 2027, Anthropic will obtain about 3.5 gigawatts of next-generation TPU compute via Broadcom as part of its “multi-gigawatt” compute expansion plan.
Broadcom added a key qualifier in the filing: “Anthropic’s consumption of expanded compute depends on its continued commercial success.” The three parties are also discussing deployment support with “operational and financial partners.”
This is not the first time Anthropic has signed a large compute agreement. In October 2025, Anthropic and Google Cloud signed a collaboration agreement giving it access to up to 1 million TPU units, expected to bring more than 1 gigawatt of compute online in 2026. On a December 2025 earnings call, Broadcom CEO Hock Tan confirmed that Anthropic had placed two TPU orders of $10 billion and $11 billion, respectively. On this year's March earnings call, Tan further said the company expects about $21 billion in AI revenue from Anthropic in 2026 and more than $42 billion in 2027 (per Mizuho analyst estimates).
On the AWS side, Project Rainier went live in October 2025, deploying nearly 500,000 Trainium2 chips across multiple data centers in the United States. Amazon has cumulatively invested $8 billion in Anthropic, and Anthropic engineers have contributed directly to low-level Trainium kernel development and provided design input for the next-generation Trainium3 chips.
At this point, Anthropic's compute sources span three chip routes (AWS Trainium, Google TPU, NVIDIA GPU) and three major cloud platforms (AWS, Google Cloud, Microsoft Azure). Anthropic's blog makes a point of noting that Claude is the "only leading AI model available across all three of the world's major cloud platforms."
The split from OpenAI Stargate
Anthropic’s compute model stands in sharp contrast to OpenAI’s.
OpenAI chose a heavy asset route: in January 2025, together with SoftBank and Oracle, it established Stargate LLC, aiming to invest $500 billion over four years to build 10 gigawatts of AI infrastructure. OpenAI holds operational responsibility and design control, while Oracle is responsible for construction and SoftBank bears the financial responsibility. To date, Stargate’s planned compute is close to 7 gigawatts, and its cumulative investment commitments exceed $400 billion.
However, friction over control emerged among the partners during Stargate's rollout. According to a Tom's Hardware report in February this year, OpenAI, Oracle, and SoftBank disagreed over data center ownership, delaying some projects. In addition, OpenAI's total cloud-services procurement commitments already exceed $500 billion (Microsoft $250 billion + Oracle approximately $300 billion + AWS approximately $50 billion). The company is expected to burn about $17 billion in cash in 2026, with cash-flow break-even coming in 2030 at the earliest.
Anthropic takes an asset-light route: it builds no data centers and buys no chips. Capex is borne by the cloud providers, while Anthropic, as a customer, locks in capacity and pricing through long-term agreements and retains the flexibility to switch among multiple chip routes. The trade-off of not owning infrastructure is potentially higher long-term unit costs. According to reports, Anthropic's gross margin is about 40%, and it is expected to lose about $14 billion in 2026.
Which model proves superior remains to be seen. OpenAI is betting on economies of scale and infrastructure autonomy, while Anthropic is betting on supply-chain flexibility and capital efficiency. But one fact is already clear: in the AI compute arms race, long-term compute supply agreements are becoming a competitive factor on par with capital and technology.