
A contract dispute between Anthropic and the U.S. Department of Defense escalated this week after CEO Dario Amodei said the company “cannot in good conscience accede” to the Pentagon’s request to grant unrestricted access to its artificial intelligence systems.
The confrontation centers on a Friday deadline of 5:01 p.m. ET set by Defense Secretary Pete Hegseth, who issued the ultimatum on Tuesday after meeting with Amodei at the Pentagon, according to a source familiar with the discussion who spoke to the BBC. A senior Pentagon official confirmed that Anthropic had until Friday evening to comply or face consequences.
Anthropic develops the AI chatbot Claude and was one of four companies awarded Pentagon contracts last summer worth up to $200 million (£148 million) each. The other contract recipients were OpenAI, Google, and xAI, the firm behind the chatbot Grok. Anthropic was the first tech company approved to work in the Pentagon’s classified military networks and is currently the only frontier AI lab with classified-ready systems for the military. The Defense Department is reportedly preparing xAI for similar classified work.
The dispute concerns two conditions Anthropic says must remain in its contract: preventing the use of Claude for mass surveillance of Americans and prohibiting fully autonomous weapons that operate without human involvement. Amodei wrote Thursday that “in a narrow set of cases, we believe AI can undermine, instead of defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.” He identified those cases as “mass surveillance of Americans” and “fully autonomous weapons with no human in the loop.”
This isn’t about Anthropic or the specific conditions at issue. It’s about the broader premise that technology deeply embedded in our military must be under the exclusive control of our duly elected/appointed leaders. No private company can dictate normative terms of use—which… https://t.co/VHbtzWujDA
— Senior Official Jeremy Lewin (@UnderSecretaryF) February 27, 2026
A source told the BBC that Anthropic’s red lines include autonomous kinetic operations in which AI tools make final military targeting decisions without human intervention. The source described the tone of Tuesday’s meeting as cordial. An Anthropic spokesperson said Amodei “expressed appreciation for the Department’s work and thanked the Secretary for his service” during the discussion.
Anthropic says negotiations on these safeguards have been underway for months. In a statement to TechCrunch, the company said contract language received overnight from what it referred to as the “Department of War” “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.” The company added: “New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will. Despite DOW’s recent public statements, these narrow safeguards have been the crux of our negotiations for months.”
Anthropic said it is continuing good-faith conversations and has not ended negotiations. “We continued good-faith conversations about our usage policy to enable Anthropic to continue to support the government’s national security mission in line with what our models can reliably and responsibly do,” the company said in a separate statement.
Pentagon officials have rejected claims that they intend to use AI for domestic surveillance or fully autonomous weapons. Sean Parnell, the department’s top spokesman, wrote on social media that the military “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.” In another post, he said the Pentagon seeks to “use Anthropic’s model for all lawful purposes” and argued that granting that access would keep the company from “jeopardizing critical military operations.” He also wrote, “We will not let ANY company dictate the terms regarding how we make operational decisions.”
A senior Pentagon official told the BBC that the current conflict between the agency and Anthropic is unrelated to the use of autonomous weapons or mass surveillance.
Hegseth’s ultimatum included additional warnings. Military officials said that if Anthropic does not comply, the department could cancel its contract, designate the company a supply chain risk, or invoke the Defense Production Act. The supply chain risk label is typically used for foreign adversaries and could disrupt Anthropic’s partnerships. The Defense Production Act, a Cold War-era law, gives the president authority to compel companies to prioritize or expand production for national defense. A Pentagon official told the BBC that invoking the DPA could compel Anthropic executives to grant unrestricted use of their products on national security grounds.
Amodei described the threat to both label Anthropic a supply chain risk and invoke the Defense Production Act as contradictory. “One labels us a security risk; the other labels Claude as essential to national security,” he wrote.
Anthropic has partnerships with companies including Palantir. Sources told the BBC that Claude was used through a contract with Palantir during the January operation that led to the capture of former Venezuelan President Nicolás Maduro.
The company has positioned itself as focused on AI safety and regularly publishes safety reports. One report last year acknowledged that its AI technology had been “weaponised” by hackers who used it to conduct sophisticated cyber-attacks. That safety-focused image came under scrutiny after reports that Claude was used during the Maduro operation.
Defense undersecretary for research and engineering Emil Michael criticized Amodei on X, alleging that he “has a God-complex” and “wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk.” Michael has previously said the Pentagon wants OpenAI, Google, xAI, and Anthropic to “be able to use any model for all lawful use cases.”
Members of Congress have also commented on the dispute. Sen. Thom Tillis, a North Carolina Republican who is not seeking reelection, said the Pentagon has been handling the matter unprofessionally while Anthropic is “trying to do their best to help us from ourselves.” He told reporters, “Why in the hell are we having this discussion in public? This is not the way you deal with a strategic vendor that has contracts.” Tillis added, “When a company is resisting a market opportunity for fear of negative consequences, you should listen to them and then behind closed doors figure out what they’re really trying to solve.”
Sen. Mark Warner, the ranking Democrat on the Senate Intelligence Committee, said he was “deeply disturbed” by reports that the Pentagon is “working to bully a leading U.S. company.” He said the episode “further underscores the need for Congress to enact strong, binding AI governance mechanisms for national security contexts.”
Retired Air Force Gen. Jack Shanahan, who led Project Maven, an effort to use AI to analyze drone footage and support weapons targeting, during the first Trump administration, also addressed the situation on social media. Google employees protested that company’s involvement in Project Maven, prompting Google to decline to renew the contract and to pledge not to use AI in weaponry. Shanahan wrote, “Since I was square in the middle of Project Maven & Google, it’s reasonable to assume I would take the Pentagon’s side here. Yet I’m sympathetic to Anthropic’s position. More so than I was to Google’s in 2018.” He said Claude is already widely used across the government, including in classified settings, and described Anthropic’s red lines as “reasonable.” He also wrote that large language models “are not ready for prime time in national security settings,” particularly for fully autonomous weapons, adding, “They’re not trying to play cute here.”
The dispute unfolds as Hegseth has taken steps affecting legal leadership within the department. In February, weeks after becoming defense secretary, he told Fox News that “ultimately, we want lawyers who give sound constitutional advice and don’t exist to attempt to be roadblocks to anything.” That month he fired the top lawyers for the Army and the Air Force without explanation. The Navy’s top lawyer had resigned shortly after the election in late 2024.
As the Friday 5:01 p.m. ET deadline approaches, Amodei wrote that it is “the Department’s prerogative to select contractors most aligned with their vision,” adding, “given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider.” He said that if the Pentagon chooses to offboard Anthropic, the company “will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions.”
