Anthropic was a friend to US national security, and suddenly it's a threat. That threat is now suing the Department of Defense, which sees the company as a supply chain risk. It was, as the fictional Mad Titan Thanos would say, "inevitable," after CEO Dario Amodei declared the company had "no choice" but to act.

 

If you don't know, the company is best known for its AI, Claude, and landed a $200 million contract with the Department of Defense in July 2025. It was one of four companies in the deal, alongside Google, OpenAI, and xAI. Under the deal, the DoD would be able to use these companies' respective AI models for "a variety of mission areas."

 


Before shaking hands with the US government, Anthropic wasn't really a household name outside the alleys of Silicon Valley. It was either Elon Musk and his pet Grok or Sam Altman's darling, ChatGPT, that came to mind when the term "AI" was mentioned to anyone aware of this technology. But that might change now, and maybe people will start seeing this technology's worth beyond unnecessary uses like generating low-quality text and designs.

 

Let's take a closer look at why this friendship between Anthropic and the US government turned cold.



The Anthropic-Department of Defense Conflict

Anthropic simply doesn't want the Department to use its AI for "fully autonomous weapons" and "mass domestic surveillance." Interestingly, Dario has noted that the contract contains no such terms, and he wants it to stay that way. That is the sole reason the company and the DoD will now be seeing each other in court as the legal heat turns up a notch. Yet if you read Dario's official statement, he sees his company and the Department sharing the same objective: defending the "American people."

 

Look at the situation closely, and there's no trust left on either side.



The US government thinks Anthropic wants to "personally control the US Military," as is clear from a personal attack US Undersecretary for Defense Emil Michael made on Twitter.

 

It's the same with Anthropic: the company doesn't trust the US government not to use its AI to invade people's privacy and lives. It said plainly in one official statement that the technology isn't "reliable enough to power fully autonomous weapons."

 

And, for that matter, businessman-turned-US President Donald Trump said the company was fired "like dogs," as he told POLITICO in a recent interview. That was the same day Dario announced the hard decision to stand up to the Pentagon.

 

Now labeled a supply chain risk by the Pentagon, the company is fighting in federal court to reverse the designation, calling the US government's actions "unprecedented and unlawful."

What's Next Now That Anthropic Has Filed the Case Against the DoD?

The AI firm is asking only that the "supply chain risk" label be removed. That's all. It's still willing to work with the US government, but that seems nearly impossible as long as the latter maintains its stance and keeps viewing Anthropic as a company trying to take over the US military.

 

Lastly, Anthropic is likely to have the upper hand here, as the contract never included the terms of use the US government is now seeking.

 

We're also eager to see whether the other three companies that shook hands with the US government on AI use will have a say in this.