Since late February, AI research company Anthropic and the Pentagon have engaged in discussions regarding Anthropic’s concern over the use of its technology for things like mass surveillance and autonomous weapons.
On March 1, the most significant point of this dispute was revealed to the public through a New York Times report. According to the report, the Pentagon wanted the company to allow for the collection and analysis of unclassified, commercial bulk data on Americans, such as geolocation and web browsing data. Although Anthropic was willing to let its technology be used by the National Security Agency for classified material collected under the Foreign Intelligence Surveillance Act, the firm wanted a binding promise that the Pentagon wouldn't use its technology on unclassified commercial data. Such data, sensitive information owned by businesses or shared with the government that requires protection, is often termed Controlled Unclassified Information (CUI). Examples include proprietary intellectual property, software code and sensitive personally identifiable information.
Despite threats from the Department of War, Anthropic CEO and cofounder Dario Amodei said the company would not accept the Pentagon's final offer because it "cannot in good conscience accede to their request." However, the Wall Street Journal reported that although the Pentagon declared the United States military would end its use of Anthropic's AI, the recent strikes in Iran were aided by those same tools.
The threats were then carried out: on behalf of President Trump, Secretary of Defense Pete Hegseth posted a "final" decision on the matter on March 4, designating Anthropic a supply-chain risk to national security. Effective immediately, no partner doing business with the United States military may conduct any commercial activity with the AI firm.
In response, Anthropic said it will challenge the move, arguing that the designation sets a dangerous precedent for any American company that negotiates with the government.
OpenAI—the company behind ChatGPT—on the other hand, has thrived due to its opposite decision. Under its original deal, OpenAI agreed to let the Pentagon use its AI systems for any lawful purpose. However, it was forced to make changes after user backlash.
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The Department of War agrees with these principles, reflects them in law and policy, and we put them into our agreement,” OpenAI CEO Sam Altman said on X on March 2, in response to the new changes to the contract. “We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted.”
However, users on X noted a discrepancy in his claims. In a meeting with OpenAI employees on March 3, Altman stated that his company does not get to choose how the military uses its technology. "So maybe you think the Iran strike was good and the Venezuela invasion was bad," he said. "You don't get to weigh in on that."
Despite winning favor with the Department of War, OpenAI lost public support: ChatGPT's daily uninstall rate rose 295% after the military partnership was announced, while Anthropic's Claude app climbed to #1 on Apple's App Store.
More importantly, these decisions raise a variety of ethical, legal and political concerns; the outcome sets not only a precedent for the relationship between private AI companies and the government but also marks a loss of civilian trust.