Pentagon AI deal rewritten: OpenAI bars U.S. surveillance after backlash


OpenAI has amended its agreement with the U.S. Department of Defense, referred to in company statements as the Department of War (DoW), after public backlash over its use in classified military operations.

On March 2, 2026, CEO Sam Altman posted on X that OpenAI would add explicit language prohibiting the use of its AI systems for domestic surveillance of U.S. persons and nationals. The amendment also states that intelligence agencies such as the National Security Agency (NSA) will not use OpenAI’s systems without a new agreement.

The added clause reads:

“Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.

For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”

Here is a repost of the internal announcement:

We have been working with the DoW to make some additions in our agreement to make our principles very clear.

1. We are going to amend our deal to add this language, in addition to everything else:

“• Consistent with applicable laws,…

— Sam Altman (@sama) March 3, 2026

Altman also wrote that if he received an order he believed was unconstitutional, he would go to jail rather than follow it.

He acknowledged the timing of the announcement. “One thing I think I did wrong: we shouldn’t have rushed to get this out on Friday,” Altman wrote. “The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”

The agreement was first announced on Friday, February 27, shortly after President Donald Trump ordered U.S. government agencies to stop using Claude and other services from Anthropic. Anthropic began working with the U.S. government in 2024.

The Pentagon had been pressuring Anthropic to revise its contract to permit “all lawful use” of its AI, including mass surveillance and fully autonomous weapons. Anthropic refused and said that “no amount of intimidation or punishment” would change its position on mass domestic surveillance or fully autonomous weapons. The Defense Department then took steps to designate Anthropic as a “supply chain risk,” a classification typically applied to Chinese companies believed to be working with their government, and threatened to block contractors from using Anthropic’s products.

Altman said in conversations with U.S. officials that Anthropic should not be designated as a supply chain risk and that he hoped the Defense Department would offer Anthropic the same agreement OpenAI signed. During an AMA session on X over the weekend, he said he did not know the details of Anthropic’s agreement or how it differed from OpenAI’s, but if it had been the same, he believed Anthropic should have accepted it.

In an internal memo described by The Wall Street Journal, Altman outlined three red lines guiding OpenAI’s work with the DoW:

  • No use of OpenAI technology for mass domestic surveillance
  • No use of OpenAI technology to direct autonomous weapons systems
  • No use of OpenAI technology for high-stakes automated decisions, such as “social credit” systems

Later that Friday, OpenAI announced it had reached an agreement for classified AI deployment and said those red lines were included in the contract. Altman wrote on X:

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

OpenAI stated that its agreement contains more guardrails than any previous classified AI deployment, including Anthropic’s.

Under the contract, the Department of War may use the AI system “for all lawful purposes,” consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The agreement references the Fourth Amendment, the National Security Act of 1947, the Foreign Intelligence Surveillance Act of 1978, Executive Order 12333, and DoD Directive 3000.09 dated January 25, 2023.

The AI system cannot independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control. It also cannot assume high-stakes decisions requiring approval by a human decisionmaker under the same authorities. For intelligence activities, any handling of private information must comply with constitutional and statutory protections, and the system cannot be used for unconstrained monitoring of U.S. persons’ private information. Domestic law-enforcement use is restricted by the Posse Comitatus Act and other applicable law.

OpenAI described its deployment architecture as cloud-only, stating it is not providing “guardrails off” models or non-safety-trained systems and is not deploying models on edge devices, where they could potentially be used in autonomous lethal weapons. The company retains full discretion over its safety stack and will have cleared forward-deployed engineers and safety researchers in the loop.

The Department of War plans to convene a working group of leaders from frontier AI labs, cloud providers, and the Department’s policy and operational communities. OpenAI will participate.

Public reaction followed quickly. On Reddit, a post titled “You’re now training a war machine. Let’s see proof of cancellation” received more than 32,000 upvotes. Similar posts in the ChatGPT and OpenAI subreddits received tens of thousands of upvotes, and the company faced criticism on Hacker News.

Sensor Tower reported that ChatGPT uninstalls rose sharply after the announcement, with average daily uninstall rates up by about 200 percent compared with normal levels. Another analysis cited a 295 percent day-over-day jump.

During this period, Anthropic’s Claude climbed to the number one position on Apple’s App Store Top Free Apps leaderboard, surpassing both ChatGPT and Google Gemini. Anthropic launched a memory import tool to make switching to Claude from another chatbot easier.

Although Trump ordered agencies to stop using Claude, CBS News reported that Claude was still in use in the U.S.–Israel war with Iran as of Tuesday. The Pentagon declined to comment on its dealings with Anthropic.

Additional reporting from The Verge cited a source familiar with negotiations who said OpenAI’s deal included fewer restrictions than Anthropic’s earlier proposal because of the phrase “any lawful use.” According to the source, the Pentagon would not back down from its desire to collect and analyze bulk data on Americans. The source said that if a use is technically legal, the U.S. military can use OpenAI’s technology to carry it out.

Ross Andersen of The Atlantic reported that during negotiations with Anthropic, the Pentagon inserted qualifying phrases such as “as appropriate” into proposed agreements, which could leave room for interpretation regarding mass domestic surveillance or fully autonomous killing machines.

Bloomberg reported that OpenAI is participating in a competition to develop software enabling drones to be controlled via voice. Anthropic also participated. Sarah Shoker, who led OpenAI’s geopolitics team for three years before leaving last June, wrote that questions about whether AI voice tools in a kill chain amount to helping build a weapon depend on interpretation. She added that definitions such as “human supervision,” “human in the loop,” and “meaningful human control” remain debated, and that policymakers have reinterpreted gaps in the law in the past.

Artificial intelligence is already used in military contexts, including logistics and intelligence analysis. Palantir Technologies provides data analytics tools to government customers for intelligence gathering, surveillance, counterterrorism, and military purposes. The UK Ministry of Defence recently signed a £240 million contract with the company.

Palantir’s AI-powered defense platform, Maven, integrates satellite data and intelligence reports, which can then be analyzed by commercial AI systems such as Claude. Louis Mosley, head of Palantir’s UK operations, said the system helps make “faster, more efficient, and ultimately more lethal decisions where that’s appropriate.”

Lieutenant Colonel Amanda Gustave, chief data officer for NATO’s Task Force Maven, said there is human oversight. “We are always introducing a human in the loop,” she said, adding that it “would never be the case” that an AI would “make a decision for us.”

Professor Mariarosaria Taddeo of Oxford University told the BBC that with Anthropic no longer working with the Pentagon, “the most safety-conscious actor” was “out from the room.”

“That is a real problem,” she said.