Who:
Anthropic (maker of the Claude models, an AI lab founded by former OpenAI executives) is clashing with the U.S. Department of Defense. Emil Michael, the Under Secretary of Defense for Research and Engineering, is leading the review for the Pentagon, and Defense Secretary Pete Hegseth is backing a harder line. See the Defense Department bio.
What happened:
The Pentagon is reviewing its relationship with Anthropic and is considering labeling the company a "supply chain risk," a designation usually reserved for foreign adversaries. Anthropic is pushing for written bans on fully autonomous weapons and mass domestic surveillance, while the Pentagon wants models available for "all lawful use." Read the reporting on axios.com.
Important nuance:
Other AI vendors, including OpenAI (ChatGPT), Google (Gemini), and xAI (Elon Musk's startup), agreed to "all lawful use" for unclassified systems. According to one senior official, one vendor has already agreed to that standard across all systems. None of these vendors is approved for classified use yet. More context at semafor.com.
Why this matters:
As of February, Claude is the only AI model running on the Pentagon's classified networks. "Classified" here means systems that handle secret or sensitive national security information and require tighter vetting, special security controls, and personnel clearances. Anthropic has shipped customized models for national-security users, which is a higher bar than unclassified access. See more at axios.com.
When and where:
The contract, worth up to $200 million, was awarded in summer 2025. Talks stretched into 2026 and escalated at a defense summit in West Palm Beach, Florida, on Tuesday, Feb. 17. Timeline details are available at axios.com.
Each side's line:
Anthropic describes the discussions as "productive conversations, in good faith," and says it remains committed to using frontier AI in support of U.S. national security. Pentagon officials counter that anything short of "all lawful use" could tie commanders' hands in a crisis. Reporting: axios.com.
What "supply chain risk" means:
This is a rare and severe label. If applied, it would force defense contractors to certify they do not use Anthropic's models. In practical terms, it would cut Anthropic out of large parts of government-driven business and partner channels. Explanation and implications at axios.com.
Backdrop:
The dispute is happening amid public criticism of Anthropic from political figures like David Sacks, who has called the company’s approach "woke AI." Politics are a factor but not the whole story. See background reporting at axios.com.
Why founders should care:
Guardrails land in the contract, not in a blog post. Expect government asks for "all lawful use" language and plan your redlines now. See context at semafor.com.
Government risk is binary. A supply-chain tag doesn't just cost a logo; it can wipe out your channel through prime contractors and subcontractors. More on the consequences at axios.com.
Competitor flexibility shapes deal flow. Rivals have conceded more (at least for unclassified use), which will shift partnerships and procurement momentum. See analysis at semafor.com.
Practical moves:
Pre-bake permissible-use clauses. Spell out what you will and won't support: surveillance limits, autonomy rules, targeting restrictions, and escalation pathways.
Separate SKUs and policies for unclassified versus classified products from day one. That makes compliance and contracting clearer.
Build a non-government revenue hedge so a policy stand doesn't crater your profit and loss statement.
Bottom line:
This fight is about control and trust. The Pentagon wants options in a crisis; Anthropic wants legal limits on harmful uses. The outcome will shape who gets to sell advanced AI to defense customers and on what terms.