The US Department of Defense now has a new preferred AI partner in OpenAI, following a dramatic falling-out with Anthropic over the limits of artificial intelligence in military operations. The split has set a precedent — and a warning — for how AI companies are expected to engage with the federal government under the current administration.
Anthropic’s conflict with the Pentagon was rooted in two non-negotiable company principles: AI must not be used for mass domestic surveillance, and human beings must remain responsible for lethal force. The company presented these not as obstacles but as baseline ethical commitments any reasonable government partner should accept.
Instead of accepting those terms, Pentagon officials escalated the dispute, ultimately triggering a presidential directive ordering all federal agencies to halt use of Anthropic’s products. President Trump described Anthropic’s position as an attempt to override the Constitution — a framing that Anthropic flatly rejected.
OpenAI CEO Sam Altman then announced a Pentagon deal that, he said, contains those very same commitments on surveillance and weapons, framing his company as the responsible middle ground. He also called on the Pentagon to offer identical terms to all AI developers, an implicit rebuke of the administration's hard-line approach to Anthropic.
The reaction across the AI industry was telling. Hundreds of employees from OpenAI and Google signed a unified statement warning against government efforts to divide and conquer the industry. The episode has made clear that the ethics of AI deployment in military contexts will remain among the most contentious issues in tech for years to come.