OpenAI and Anthropic, From Mockery to Divergence: The Ethical Battle With the U.S. Department of Defense
Recently, these two AI tech giants publicly mocked each other on media platforms to compete for users. But when the U.S. Department of Defense stepped in, they suddenly had to stand together against pressure from the Pentagon. This battle isn’t just about money or military contracts; it’s about a fundamental question: How should AI be used when national security is on the line?
From “mocking” to alliance: Why two AI companies are facing the Pentagon together
In the past, the dispute between OpenAI and Anthropic was intense. Claude was developed by a team that left OpenAI, and both companies fiercely competed for users, enterprise clients, and investment. Last year’s Super Bowl saw Anthropic spend money to mock OpenAI’s ChatGPT in televised ads. That’s how tense things were.
However, everything changed when Anthropic signed a $200 million contract with the U.S. Department of Defense. Claude became the first AI model deployed on the military’s secret network, supporting intelligence analysis and mission planning. But Anthropic set clear conditions: Claude cannot be used for large-scale surveillance of American citizens, nor developed into autonomous weapons without human oversight.
The Pentagon didn’t accept this. Its stance was simple: when it buys a tool, it will use that tool freely, without tech companies dictating terms. Defense Secretary Hegseth issued a direct ultimatum: Anthropic must agree by 5:01 PM Friday or face the consequences. Anthropic’s CEO refused, stating publicly that while he understands AI’s importance for national defense, in some cases AI can undermine democratic values, and “we cannot accept this demand in good conscience.”
The Pentagon’s reaction was harsh. Deputy Defense Secretary Emil Michael called Anthropic’s CEO a scammer on social media, claiming he has a messianic complex and is gambling with national security.
What happened next was unexpected. Over 400 employees from OpenAI and Google signed a joint open letter titled “We Will Not Be Divided.” The letter pointed out that the Pentagon was exploiting divide-and-conquer tactics, arranging separate deals with individual AI companies and trying to pressure others into accepting conditions Anthropic refused.
OpenAI’s CEO also sent an internal memo to all staff, stating that OpenAI shares the same red lines as Anthropic: no large-scale surveillance, no development of lethal autonomous weapons. Just days earlier, they weren’t even on the same page, but suddenly, these companies united to oppose Pentagon pressure. This solidarity seemed to be a victory of ethical principles.
Ethical red lines: Anthropic stands firm, OpenAI looks the other way
But this unity only lasted a few hours. When the deadline passed, Anthropic refused to sign. A company valued at $380 billion was willing to risk losing a $200 million Pentagon contract.
Washington’s response was not just commercial. An hour later, Trump posted on Truth Social, calling Anthropic “leftist lunatics,” accusing the company of trying to put itself above the Constitution and of gambling with American soldiers’ lives. He demanded that all federal agencies immediately stop using Anthropic’s technology.
Later, Defense Secretary Hegseth classified Anthropic as a “supply chain security risk”—a label usually reserved for companies like Huawei. The clear message: all contractors working with the U.S. military are forbidden from using Anthropic’s products. Anthropic announced it would sue.
That same night, OpenAI—previously holding a similar stance—signed an agreement with the Pentagon. The question is: what does OpenAI get in return?
First, a monopoly position: becoming the AI provider for the military’s secret network after Claude was excluded. But OpenAI also set three conditions for the Pentagon: no large-scale surveillance, no development of autonomous weapons, and all high-risk decisions must involve human oversight.
The U.S. Department of Defense said it accepted these terms.
Exactly: these are the same conditions the Pentagon had spent weeks refusing to accept from Anthropic, yet when another company proposed them, negotiations concluded within days. However, the two positions aren’t identical. Anthropic added an extra layer of protection, arguing that current laws can’t keep up with AI capabilities. For example, AI could legally purchase and synthesize location data, browsing history, and social media information, effectively enabling surveillance without breaking any law. Anthropic believes simply writing “no surveillance” into a contract isn’t enough; the legal loopholes themselves must be addressed.
OpenAI didn’t insist on this point. They accepted the Pentagon’s view that current laws are sufficient. But if you think this is just a matter of contractual differences, you’re mistaken. Behind the numbers and deals lies an ideological battle.
David Sacks, the “AI emperor” of the White House, publicly criticized Anthropic for developing “woke AI”—pursuing political correctness over performance. A senior Pentagon official told the media that Anthropic’s problem is driven by ideology. Elon Musk’s xAI, a direct competitor of Anthropic, has been attacking it on X, claiming Anthropic “hates Western civilization.” Notably, last year, Anthropic’s CEO did not attend Trump’s inauguration, while OpenAI’s CEO did.
The “kill the chicken to scare the monkey” effect and the future of AI companies
So, here’s a summary of what happened. The same ethical principles, the same red lines. Anthropic insisted on an extra layer of protection, took the wrong political stance, and was labeled a national security threat on par with Huawei. OpenAI dropped that extra layer, maintained good relations with the government, and walked away with the contract.
Is this a victory of principles, or simply a price put on them?
Military contracts have faced this kind of scrutiny before. In 2018, over 4,000 Google employees signed a petition, and dozens resigned, to oppose Google’s participation in Project Maven, the Pentagon’s drone video analysis AI project designed to help identify targets faster. Google eventually withdrew and did not renew the contract. The employees won.
Eight years have passed. The same debate reemerges. But this time, the rules of the game have changed entirely.
An American company told the military: we will do business with you, but two things are off-limits. The U.S. government’s response was to exclude it from the entire federal system. The “supply chain security risk” label causes far more damage than losing a single $200 million contract. Anthropic’s revenue this year is around $14 billion, so the contract itself is a small fraction. But the label means any company dealing with the U.S. military cannot use Claude.
These companies don’t need to agree with the Pentagon’s stance. They just need to do a simple risk assessment: continuing to use Claude might cost them government contracts; switching to another model poses no problem. The choice is extremely straightforward.
This is the real signal of the issue. Whether Anthropic can withstand it doesn’t matter. What matters is whether the next company dares to. They will weigh the costs of sticking to their ethical principles against the economic benefits, and make a rational decision. The “kill the chicken to scare the monkey” strategy—eliminating one company to warn others—is the real game at play.
Looking back at that image, everyone holding hands high except two people with clenched fists, perhaps that is the true normal. The ethical principles of AI companies may be similar, but their hands don’t have to be on the same side, especially when politicians push to mock and punish those who don’t conform.