Trump Orders Federal Agencies to Drop Anthropic Services Amid Escalating Pentagon Feud

President Donald Trump has directed all U.S. federal agencies to immediately cease using Anthropic’s AI technology, including its flagship Claude model, following a heated standoff with the Department of Defense over safeguards on military applications. The order grants the Pentagon and other agencies that depend on Anthropic’s systems a six-month phase-out period, while the Defense Secretary designated the company a supply-chain risk to national security, barring contractors from doing business with it. The move caps a dispute in which Anthropic refused to lift restrictions on uses such as fully autonomous weapons and mass domestic surveillance, and Trump threatened further civil and criminal consequences if the company does not cooperate during the transition. Rival OpenAI quickly announced a new agreement with the Pentagon to fill the gap in classified AI capabilities.

Trump’s Directive Shakes Federal AI Landscape

President Donald Trump announced the sweeping order via Truth Social, declaring that every federal agency must “IMMEDIATELY CEASE all use of Anthropic’s technology.” He emphasized that the United States “doesn’t need it, doesn’t want it, and will not do business with them again.” The president’s post highlighted his view that Anthropic, which he described using strong language as a group of “leftwing nut jobs,” had attempted to “strong-arm” the Department of Defense—referred to by Trump as the Department of War—by imposing its terms of service over constitutional priorities.

The core conflict stems from Anthropic’s insistence on maintaining strict guardrails for its Claude AI model, particularly in sensitive military contexts. The company sought assurances that its technology would not be deployed for fully autonomous lethal weapons systems or for mass surveillance of U.S. citizens. These red lines clashed with the Pentagon’s push for unrestricted access to support “all lawful purposes,” including advanced intelligence analysis, operational planning, and other national security functions.

Defense Secretary Pete Hegseth moved swiftly in response, designating Anthropic a “supply-chain risk to national security” shortly after Trump’s announcement. This label, typically applied to entities posing threats akin to foreign adversaries, prohibits any contractor, supplier, or partner doing business with the U.S. military from conducting commercial activity with Anthropic. The designation effectively extends the restrictions beyond direct government use, potentially isolating the company from a broad swath of defense-related contracts and partnerships.

Phase-Out Timeline and Compliance Pressures

The order provides a structured transition to minimize immediate disruptions:

Most federal agencies must halt use immediately.

The Department of Defense and other agencies with embedded Anthropic integrations receive a six-month phase-out window to migrate to alternative systems.

Trump warned that failure by Anthropic to assist cooperatively during this period could trigger “major civil and criminal consequences,” invoking the “full power of the Presidency” to enforce compliance.

This timeline acknowledges the practical difficulty of replacing Claude, which, under a prior contract valued at up to $200 million, became the first AI system cleared for certain classified networks and has since grown integral to them.

Implications for National Security and AI Ecosystem

The feud underscores deeper tensions in how the U.S. government balances AI innovation with safety and ethical constraints. Anthropic’s principled stance on limiting high-risk applications—rooted in its founding emphasis on AI alignment and responsible development—collided with the administration’s drive for rapid, unrestricted adoption to maintain strategic advantages in global competition.

The immediate fallout includes potential complications for intelligence analysis, defense logistics, and other mission-critical functions reliant on advanced language models. Agencies now face accelerated procurement shifts, with increased scrutiny on vendor reliability and alignment with national security priorities.

In a notable development, OpenAI announced an agreement with the Defense Department hours after the Anthropic restrictions were detailed, positioning its technology—including models like those behind ChatGPT—for use in classified systems. This shift highlights the competitive dynamics in the AI sector, where government contracts represent significant validation and revenue streams.

Broader Ramifications for Private Sector AI Providers

The administration’s actions send a clear signal to other AI firms about expectations for cooperation with defense needs. By framing the dispute in terms of national security imperatives over corporate policies, the move could pressure companies to relax internal safeguards when engaging with federal entities.

Anthropic has indicated it may challenge the supply-chain risk designation in court, calling it “legally unsound.” The company maintains that its restrictions align with broader U.S. interests in preventing misuse of powerful AI, even as the government prioritizes operational flexibility.

This episode marks one of the most aggressive interventions yet in the intersection of private AI development and military applications, reshaping how tech providers navigate government partnerships amid heightened geopolitical stakes.

Disclaimer: This is a news report based on publicly available information and does not constitute financial, legal, or investment advice.