Anthropic, which develops the Claude family of advanced AI models, challenged Defense Secretary Pete Hegseth's March 2026 determination that procuring AI services from the company presents a supply-chain risk to national security. The dispute arose after Anthropic refused to contractually authorize the Defense Department to use Claude for mass domestic surveillance or lethal autonomous warfare, prompting the agency to cancel contracts and remove Claude from its systems.
Denying Anthropic interim relief, the panel acknowledged that the company would 'likely suffer some degree of irreparable harm' but found its interests 'seem primarily financial in nature.' The court noted that while Anthropic documented 'potentially significant financial losses,' CEO Dario Amodei had publicly stated that the 'vast majority' of customers would be 'unaffected' by the designation. The judges also pointed to evidence that Anthropic had 'financially benefited from its refusal' to comply with Pentagon demands, with one report calling it 'the best marketing spend in Silicon Valley for years.'
The case stems from a broader conflict between Anthropic and the Defense Department over AI usage policies. The company had been providing Claude to the military since 2024, but tensions escalated when the Pentagon sought broader authorization for the AI system's deployment. Secretary Hegseth's determination under 41 U.S.C. § 4713 effectively bars Anthropic from federal contracts, though it doesn't prohibit contractors from using Claude for non-Defense work.
The panel granted expedited review, with oral arguments scheduled for May 19, 2026, directing the parties to address jurisdictional questions under 41 U.S.C. § 1327 and whether Anthropic can control how its AI models function after delivery. The court noted that the case raises 'novel and difficult questions' about what constitutes a supply-chain risk and urged the parties to limit acronym use in their briefs for clarity.