Anthropic vs. Department of Defense Leaves Anthropic in Supply-Chain-Risk Limbo
In Washington, DC, the latest Anthropic vs. Department of Defense ruling did not end the dispute so much as freeze it in place. A three-judge appeals panel said Anthropic had not met the demanding standard needed to temporarily remove the Pentagon’s supply-chain-risk designation, leaving the company caught between two courts and two conflicting preliminary orders.
The case is now bigger than one company’s access to government systems. It has become a test of how far the executive branch can go in treating a major AI provider as a national-security risk, even as the company says the label is costing it business and limiting the use of its tools inside the federal government.
Why did the appeals court keep the Pentagon label in place?
The Washington, DC, panel said granting a stay would force the military to continue dealing with what it called an unwanted vendor of critical AI services during an active military conflict. The judges said they were wary of intruding on military operations or lightly second-guessing national-security judgments.
That language stands in tension with a separate ruling from San Francisco, where a lower-court judge found the Department of Defense likely acted in bad faith. In that court, the judge said the Pentagon was motivated by frustration over Anthropic’s proposed limits on how its technology could be used and by the company’s criticism of those limits. The San Francisco order led the Trump administration to restore access to Anthropic AI tools inside the Pentagon and across the rest of the federal government.
What does the split mean for Anthropic and the federal government?
The result is a legal limbo. The government sanctioned Anthropic under two different supply-chain laws with similar effects, and each court is handling only one of those designations. Anthropic has said it is the first US company to be designated under both laws, which are typically used against foreign businesses viewed as threats to national security. In this case, the designation has raised questions about whether a domestic AI company can be treated in the same way.
Anthropic spokesperson Danielle Cohen said the company is grateful the Washington, DC, court recognized the need to resolve the issues quickly and remains confident the courts will ultimately agree the supply-chain designations were unlawful. The Department of Defense did not immediately respond to a request for comment. Acting attorney general Todd Blanche took a harder line, calling the stay a victory for military readiness and saying military authority belongs to the Commander-in-Chief and Department of War, not a tech company.
How is Anthropic framing the human and business stakes?
Anthropic has argued in court that it lost business because of the designation. It has also said it is being punished for insisting that its Claude tool lacks the accuracy needed for certain sensitive operations, including deadly drone strikes without human supervision. That argument places the company’s fight in a broader debate over whether AI companies should be pressured to support uses they consider unsafe or technically unreliable.
The dispute has also taken on a wider political and operational meaning because the Pentagon is deploying AI in its war against Iran. In that setting, the Anthropic vs. Department of Defense fight is not just about a label. It is about who gets to decide how AI is used when military operations and commercial technology intersect.
What do experts say about the wider significance?
Several experts in government contracting and corporate rights have said Anthropic has a strong case against the government, while also noting that courts sometimes decline to overrule the White House on national-security matters. Some AI researchers have said the Pentagon’s actions against Anthropic chill professional debate about how well AI systems perform and where they should not be used.
For now, the conflicting rulings leave the company in a tense middle ground: access restored in one court, restriction preserved in another, and no clear path yet for resolving the split. The supply-chain-risk label remains in place in Washington, DC, even as Anthropic continues to challenge it. In the end, the same question still hangs over the case: who controls the boundaries of AI inside the state, the company that built it or the government that wants to use it?