Reduced to an "AI pawn"? The U.S. Department of Defense issues a final ultimatum, and Anthropic faces a "life-or-death decision"

According to sources, U.S. Secretary of Defense Lloyd Austin has issued an “ultimatum” to Dario Amodei, CEO of Anthropic, demanding that he remove safety measures from their AI models by Friday, or risk losing Pentagon contracts.

It is reported that during a meeting on Tuesday, Austin also threatened to blacklist the AI company. However, sources say the atmosphere remained friendly and respectful, with no loud disputes. They also added that Austin praised Anthropic’s products and expressed a desire to collaborate with the company.

Currently, the controversy centers on the restrictions Anthropic has placed on its AI model, Claude. Two sources revealed that the Pentagon has a $200 million contract with Anthropic, and wants the company to lift restrictions to allow military use of the model for “all lawful purposes.”

However, sources say Anthropic has concerns about two issues: AI-controlled weapons and large-scale domestic surveillance of U.S. citizens. An insider disclosed that the company believes AI is not reliable enough to control weapons, and there are currently no laws regulating AI’s use in mass surveillance.

It is also said that Anthropic has little time to consider its options. An insider revealed that if the company does not accept the terms discussed at Tuesday’s meeting, the Pentagon plans to terminate the contract before Friday.

A Pentagon official told the media that Anthropic must decide "by 5:01 p.m. on Friday" whether to accept or refuse. If the company declines, Austin will ensure that the Pentagon uses its products anyway under the authority of the Defense Production Act, regardless of the company's willingness.

The official also stated that Austin will designate Anthropic as a “supply chain risk.”

The Defense Production Act (DPA) grants the government authority to compel companies to act in the interest of national security; the Trump administration invoked the law during the COVID-19 pandemic. A "supply chain risk" designation, meanwhile, would prohibit companies holding military contracts from using Anthropic's products in any military project. This could significantly hurt the AI firm as it tries to expand into the enterprise sector, where many large companies hold military contracts.

In a statement after the meeting, Anthropic said, “We are engaged in sincere discussions regarding usage policies to ensure that Anthropic can continue to support government national security missions within the bounds of reliable and responsible AI.” Sources also said that Anthropic has no plans to relax restrictions on military applications.

Notably, the dispute with the Pentagon could open opportunities for competitors. Pentagon officials confirmed that Elon Musk’s xAI has “agreed to conduct projects in a confidential environment,” and other companies are “soon to agree” as well.

(Source: Cailian Press)
