Justice Department Says Anthropic Can’t Be Trusted With Warfighting Systems

Federal court hears dispute over Pentagon blocking AI company from warfighting contracts

San Francisco: The US Department of Justice has told a federal court that it did not violate Anthropic’s rights by blocking the AI company from defense contracts, arguing that the company cannot be trusted with warfighting systems.

Government lawyers say Anthropic cannot impose its own rules on how the Pentagon uses AI technology, and that the company’s First Amendment rights were not violated when authorities designated it a supply-chain risk.

The dispute began when Anthropic tried to limit how the Pentagon could use its Claude AI models. The company believes its technology should not be used for broad surveillance of Americans or to power fully autonomous weapons.

Pentagon officials say they acted because they feared Anthropic might sabotage or interfere with military operations. They worry the company could disable its technology during critical warfighting situations.

Anthropic could lose billions in revenue if the designation stands. The government argues, however, that the company’s financial harm does not outweigh the need for the current measures.

The Department of Defense is replacing Anthropic’s tools with alternatives from Google, OpenAI, and xAI, a transition taking place even as military operations continue on classified systems.

Anthropic has drawn support in court from Microsoft and from AI researchers. The company must respond to the government’s arguments by Friday as the legal battle continues.

Reference: https://www.wired.com/story/department-of-defense-responds-to-anthropic-lawsuit/