The designation, historically applied only to foreign adversaries, bars defence contractors from using Anthropic’s AI in Pentagon-related work and could cost the company billions in projected 2026 revenue.
The lawsuits were filed in the U.S. District Court for the Northern District of California and the U.S. Court of Appeals in Washington, D.C., and argue that the Pentagon’s blacklisting is unlawful and violates Anthropic’s free speech and due process rights.
The company is seeking to have the designation overturned and to block federal agencies from enforcing the restrictions.
Anthropic executives warned the consequences could extend beyond defence contracts. Chief Financial Officer Krishna Rao said the government’s actions could reduce 2026 revenue by “multiple billions of dollars” and cause damage that would be “almost impossible to reverse.”
Head of Public Sector Thiyagu Ramasamy added that the blacklist is already harming commercial relationships and could wipe out projected public sector revenue exceeding $500 million.
Chief Commercial Officer Paul Smith said one partner had switched from Claude to a rival AI model for a Food and Drug Administration project, wiping out an expected $100 million revenue pipeline, and that negotiations with financial institutions worth around $180 million had also been disrupted.
The dispute stems from Anthropic’s refusal to grant unrestricted government access to Claude, citing ethical limits on autonomous weapons and mass domestic surveillance.
Defence Secretary Pete Hegseth criticised these restrictions, prompting President Donald Trump to order federal agencies to cease using Claude, while allowing the Pentagon six months to comply due to Claude’s integration into classified systems.
Anthropic maintains it remains committed to national security work. “Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners,” a company spokesperson said.
A group of 37 engineers from Google and OpenAI filed an amicus brief supporting Anthropic, warning that the government’s actions could stifle innovation and debate over AI’s risks and applications.
The lawsuits mark the first time an American AI company has faced a formal supply-chain risk designation, setting a potential precedent for the tech industry’s relationship with the U.S. government.