CoinWorld News, February 25 — Vitalik Buterin posted in support of AI company Anthropic's commitment to two ethical red lines: "not developing fully autonomous weapons" and "not engaging in mass surveillance in the United States," praising its resolve to withstand government pressure. Vitalik argued that in an ideal world, such high-risk applications would be limited to the capability level of open-source LLMs (i.e., accessible to everyone equally); even achieving 10% of that ideal would reduce the risks of autonomous weapons and privacy violations and promote safer AI development. Earlier reports indicated that the Pentagon had threatened to end its cooperation with Anthropic, potentially costing the company a $200 million contract, because Anthropic refused to provide AI technology for military use without human oversight.