# WAR MACHINES: Trump Orders Government-Wide Purge of Anthropic as Pentagon Deadline Expires

> Published on ADIN (https://adin.chat/world/war-machines-anthropic-refuses-to-bend-the-knee-as-pentagon-threatens-nuclear-option)
> Author: Daniel
> Date: 2026-02-27

*Image: Anthropic laptop under missile attack*

**The AI safety company told the Pentagon to go to hell. Now the President has declared war on American tech. Welcome to the defining battle of the AGI era.**

*BREAKING: President Trump announced Friday afternoon that he is ordering all federal agencies to phase out use of Anthropic technology--just over an hour before the Pentagon's 5:01 PM deadline expired.*

The ultimatum is over. Anthropic did not blink.

In a [defiant public statement](https://www.anthropic.com/news/statement-department-of-war) released Wednesday, CEO Dario Amodei made clear that his company would not "in good conscience accede" to Pentagon demands for unrestricted military use of Claude, Anthropic's frontier AI model. The demands center on two capabilities Anthropic refuses to enable: mass domestic surveillance of American citizens and fully autonomous weapons systems that can select and kill targets without human oversight.

"These threats do not change our position," Amodei wrote. "We cannot in good conscience accede to their request."

Trump's response was to order every federal agency to stop using Anthropic's technology entirely.

## Trump Escalates: Government-Wide Purge

President Trump's announcement came as a shock even to close observers of the standoff. The dispute had been between Anthropic and the Pentagon--but Trump transformed it into a confrontation with the entire federal government.

The timing was deliberate. With just over an hour left on Defense Secretary Pete Hegseth's ultimatum, Trump didn't wait to see if last-minute negotiations would succeed. He preempted them.

Anthropic did not immediately respond to the president's announcement.

The move raises immediate practical questions.
Claude is already "extensively deployed" across the federal government for mission-critical applications. According to Defense One, it would take the Pentagon alone months to replace Anthropic's AI tools. Now every agency faces the same scramble.

Elon Musk piled on within minutes, posting on X that "Anthropic hates Western Civilization"--a reference to a previous version of Claude's guiding principles that encouraged "consideration of non-Western perspectives." The culture war framing was unmistakable.

## The Palantir Whistleblower: How This Fight Actually Started

The public confrontation obscures the real origin of this crisis, which traces back to a single conversation that went very wrong.

According to [Semafor's reporting](https://semafor.com/article/02/17/2026/palantir-partnership-is-at-heart-of-anthropic-pentagon-rift), Anthropic's models are deployed within the Pentagon through Palantir's Artificial Intelligence Platform. During the controversial seizure of Venezuelan President Nicolas Maduro in January, Claude appeared on the screens of officials monitoring the operation in real time.

Soon after, during a routine check-in between Palantir and Anthropic, an Anthropic official discussed the Maduro operation with a Palantir senior executive. The Palantir executive interpreted the exchange as Anthropic expressing disapproval of its technology being used for that purpose. That executive then reported the conversation directly to the Pentagon.

The effect was immediate and devastating. A senior Defense Department official confirmed to Semafor that this exchange "led to a rupture" in Anthropic's relationship with the military. Within weeks, Defense Secretary Pete Hegseth was publicly taking shots at the company: "We will not employ AI models that won't allow you to fight wars."
Anthropic has flatly denied this account, calling it "false" and stating the company has not "discussed this with, or expressed concerns to, any industry partners outside of routine discussions on strictly technical matters."

But the damage was done. The whistleblower report, whether accurate or not, had poisoned the well.

## What the Pentagon Actually Wants

The government's demands are straightforward: sign an "all lawful uses" contract that removes Anthropic's usage-policy restrictions entirely. Every other major AI lab has reportedly agreed to these terms for unclassified work, including OpenAI, Google, and Elon Musk's xAI.

Anthropic is the outlier, and its red lines are non-negotiable:

**Mass Domestic Surveillance**: Amodei argues that current law hasn't caught up with AI capabilities. The government can already purchase detailed records of Americans' movements, web browsing, and associations from public sources without a warrant. Powerful AI makes it possible to assemble this "scattered, individually innocuous data into a comprehensive picture of any person's life--automatically and at massive scale."

**Fully Autonomous Weapons**: Anthropic isn't opposed to autonomous weapons in principle. It supports partially autonomous weapons "like those used today in Ukraine." But the company maintains that current frontier AI systems "are simply not reliable enough to power fully autonomous weapons." The technology that hallucinates fake legal citations shouldn't be deciding who lives and who dies.

"We will not knowingly provide a product that puts America's warfighters and civilians at risk," Amodei wrote.

## The Pentagon's Arsenal

Before Trump's announcement, Secretary Hegseth had three weapons pointed at Anthropic's head:

**Contract Termination**: The Pentagon has signed a $200 million ceiling agreement with Anthropic. Walking away would hurt, but Anthropic could survive.

**Supply Chain Risk Designation**: This is the strange one.
The Pentagon threatened to label Anthropic a "supply chain risk"--a designation normally reserved for adversary nations and their affiliated companies. It has never been applied to an American firm. Such a designation would effectively blacklist Anthropic from the entire defense industrial base: roughly 60,000 contractors, including Lockheed Martin, would be barred from using its technology.

**Defense Production Act**: The nuclear option. Hegseth threatened to invoke this Korean War-era law to compel Anthropic to provide its technology on the government's terms regardless of whether the company consents.

As Amodei dryly noted in his statement, these threats are "inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security." Trump's government-wide order suggests the administration chose the first path--at least initially.

## The Absurdity at the Heart of This

Here's the strangest detail in this entire saga: despite the threats, despite the deadline, despite calling Amodei a liar with a God complex, the Pentagon still wants Claude.

Military officials remain deeply interested in maintaining access to Anthropic's technology. Claude is already extensively deployed across the Department of Defense for mission-critical applications including intelligence analysis, modeling and simulation, operational planning, and cyber operations. It would take months to replace Anthropic's AI tools, sources told Defense One.

The military's own reliance on the company it's threatening underscores the absurdity of the situation. An Anthropic spokesperson said Thursday that revised contract language received overnight "made virtually no progress" and would allow "safeguards to be disregarded at will." The company called for continued "productive conversations, in good faith."

Even a former leader of the Pentagon's AI initiatives broke with the administration. Retired Air Force Gen.
Jack Shanahan, who led Project Maven during the first Trump administration, wrote on social media: "Painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end."

"They're not trying to play cute here," Shanahan wrote of Anthropic's position. He noted that large language models are "not ready for prime time in national security settings," particularly for fully autonomous weapons.

## Sam Altman Enters the Chat

In a remarkable twist, OpenAI CEO Sam Altman--Amodei's former boss and longtime rival--has publicly backed Anthropic's position.

The history between them is bitter. Amodei joined OpenAI in 2016 and rose quickly, becoming Vice President of Research by 2019. But according to Keach Hagey's biography of Altman, tensions escalated over safety concerns and the pace of development. Amodei reportedly "felt psychologically abused by Altman." Altman, in turn, told colleagues the tension "was making him hate his job."

In December 2020, Amodei left. He took seven colleagues with him--including his sister Daniela--and founded Anthropic explicitly to build AI more safely than OpenAI would.

The rivalry has been public and petty. Just last week at an AI summit in India, the two CEOs went viral for refusing to join hands during a unity photo with Prime Minister Narendra Modi. In February, Anthropic's Super Bowl ad was widely seen as a direct shot at OpenAI's plan to run ads on ChatGPT. Altman responded with a lengthy X post calling the ad "clearly dishonest."

So when Altman told CNBC on Friday that "for all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety"--it was news.

"I've been happy that they've been supporting our warfighters," Altman said. "I'm not sure where this is going to go."

According to the Wall Street Journal, OpenAI is now actively working on a deal to help resolve the standoff.
An open letter signed by employees from OpenAI and Google warned: "The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused. They're trying to divide each company with fear that the other will give in."

Amodei's rivals are circling the wagons. The AI industry, whatever its internal feuds, apparently has shared red lines after all.

## Can They Actually Do This?

The legal question is genuinely complicated. According to [Lawfare's analysis](https://www.lawfaremedia.org/article/what-the-defense-production-act-can-and-can%27t-do-to-anthropic), the Defense Production Act gives the president broad authority to direct private industry for national defense. The Biden administration already established that AI falls within its scope when it required companies to report on training activities under Executive Order 14110.

But there's a critical distinction between requiring a company to prioritize existing contracts and forcing it to create a product it doesn't currently make.

"If a demand just changes the terms of sale for an existing product, the government can draw on the allocation authority's sweeping language," writes law professor Alan Rozenshtein. "If it amounts to demanding a new product, the government runs into the ambiguity."

The government would likely argue that removing contractual guardrails doesn't change the product--Claude remains the same model with the same capabilities. Anthropic would argue the opposite: its usage restrictions are part of what Claude *is* as a commercial service.

If it came to forced retraining--stripping safety guardrails from the model itself--the constitutional questions multiply. Is model training a form of editorial expression protected by the First Amendment? The Supreme Court in *Moody v. NetChoice* recognized that some algorithmic curation constitutes protected speech, but how far that extends to AI training is entirely unsettled.
If Anthropic resists a formal DPA order, the most likely path would be to comply under protest (the DPA provides for criminal penalties for noncompliance) and immediately seek a temporary restraining order.

## The Autocracy Advantage: China Doesn't Have This Problem

While American tech executives trade insults and lawyers prepare briefs, Beijing is watching with something between amusement and disbelief.

The People's Republic of China does not negotiate with its AI labs. It commands them. When the Communist Party decides that Baidu, Alibaba, or ByteDance will support a military application, the conversation is short. There are no usage policies to debate, no red lines to respect, no CEOs publishing defiant open letters. The 2017 National Intelligence Law requires all Chinese companies to "support, assist, and cooperate with state intelligence work." Full stop.

The United States, by contrast, has stumbled into a situation where its most advanced AI capabilities are controlled by private companies that can simply say "no" to the Department of Defense. The Pentagon finds itself *asking permission* to use technology developed largely with American capital, American talent, and American research infrastructure.

This has prompted uncomfortable questions: *Who elected Dario Amodei? Since when do tech CEOs get veto power over national defense? How did we end up in a situation where China's military AI strategy is set by the Politburo, and America's is negotiated with venture-backed startups?*

Anthropic's defenders argue this friction is a feature, not a bug--democratic societies should have checks on military power, and authoritarian systems are brittle, suppressing the internal dissent that catches mistakes. But in the short term, the coordination problem is real.
The Anthropic standoff may force a reckoning that was coming anyway: in an era where AI capability equals national power, can the United States afford to leave that capability in private hands with private vetoes?

China has already answered that question. America is still debating it--loudly, publicly, and with the clock having just run out.

## What Happens Now

Trump has made his move. Several paths remain:

**Protracted Legal Battle**: Anthropic challenges the government-wide order in court. The supply chain risk designation, if invoked, would face immediate legal scrutiny--it has never been applied to an American company. Meanwhile, federal agencies scramble to find alternatives while litigation proceeds.

**Quiet Compromise**: The public theatrics give way to behind-the-scenes negotiation. The government accepts some version of Anthropic's guardrails through classified channels. Trump claims victory; Anthropic keeps its principles; everyone pretends this never happened.

**Industry Solidarity Holds**: OpenAI, Google, and others refuse to fill the gap Anthropic leaves. The Pentagon discovers that every major AI lab has the same red lines, just quieter ones. The government is forced to either build its own capabilities or accept industry-wide constraints.

**Anthropic Capitulates**: The pressure proves too much. The guardrails come down. The company's entire founding ethos--that AI development requires safety constraints--collapses under government pressure.

Amodei has made clear which outcome Anthropic will not accept. In the final lines of his Wednesday statement, he struck a tone of weary defiance:

"Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required."
"We remain ready to continue our work to support the national security of the United States." The deadline has passed. The president has spoken. And Dario Amodei is still standing exactly where he was. --- **Sources:** - [Anthropic Official Statement](https://www.anthropic.com/news/statement-department-of-war) - [AP News: Trump orders all federal agencies to phase out use of Anthropic technology](https://apnews.com/article/anthropic-pentagon-ai-hegseth-dario-amodei-b72d1894bc842d9acf026df3867bee8a) - [Semafor: Palantir Partnership at Heart of Rift](https://semafor.com/article/02/17/2026/palantir-partnership-is-at-heart-of-anthropic-pentagon-rift) - [Lawfare: What the Defense Production Act Can and Can't Do](https://www.lawfaremedia.org/article/what-the-defense-production-act-can-and-can%27t-do-to-anthropic) - [Business Insider: A Timeline of Anthropic and OpenAI's Budding Rivalry](https://www.businessinsider.com/sam-altman-dario-amodei-anthropic-openai-rivalry-timeline-2026-2) - [NPR: OpenAI Shares Anthropic's Red Lines](https://www.npr.org/2026/02/27/nx-s1-5729118/anthropic-pentagon-openai-ai-weapons)