Anthropic Rejects U.S. Defense Department’s Demand to Remove AI Safety Rules

Washington, Feb 27: Anthropic, an American AI company, has rejected a demand from the U.S. government. The Pentagon asked the company to remove the safety rules in its AI systems that govern warfare and espionage. Anthropic CEO Dario Amodei made it clear that the company will not comply, stating that removing such protections from its AI tool Claude could risk undermining democratic processes.

Amodei emphasized that regardless of pressure or threats from the U.S. government, Anthropic will not compromise on its safety measures. He added that discussions with the Pentagon have not made progress, though further talks with the U.S. government may happen soon. He stressed that Claude should not be used for spying on Americans or for lethal military operations.

Meanwhile, reports suggest that U.S. Defense Secretary Pete Hegseth is considering invoking the Defense Production Act to force Anthropic to provide unrestricted AI access. Anthropic's models already include features relevant to military use, particularly in classified systems handling sensitive data. While these can be used for lawful defense and intelligence purposes, the Pentagon insists that the safety restrictions must be lifted. This disagreement has created tensions between Anthropic and the U.S. government.

Copyright © 2017 Hyderabad Headlines.