Within the span of a few hours on Friday, the Pentagon dropped its deal with Anthropic after the company refused to budge on safety guardrails barring the use of its AI for surveillance or in fully autonomous weapons without human oversight. The Pentagon then designated Anthropic a supply chain risk before signing an agreement with OpenAI instead. All of this took place just hours before U.S. military strikes started raining down on Tehran.
The deal between the Department of Defense and OpenAI led to intense backlash from the general public, who largely viewed it as OpenAI caving to the Trump administration’s requests. Meanwhile, Claude rose to the top spot on the App Store, and users are calling for a boycott of OpenAI.
OpenAI said that its technology won’t be used for mass domestic surveillance or to directly power autonomous weapons systems. The actual details of the contract and how those limitations will be implemented were not made public, but OpenAI executives shared some information in an ask-me-anything-style open forum on X over the weekend.
OpenAI’s head of national security partnerships, Katrina Mulligan, said that the contract allows the Pentagon to use OpenAI’s technology “for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.” In a separate post, she clarified that OpenAI intended “applicable law” to mean “the law applicable at the time the contract is signed.”
Mulligan also said that the contract applies only to defense and does not allow OpenAI’s technology to be used by domestic law enforcement.
“If this contract were with domestic law enforcement or the NSA, we would have required different contract provisions, but nothing in U.S. law allows the Department of War to conduct domestic surveillance,” Mulligan said.
But even CEO Sam Altman admitted that the deal was “rushed,” and that the “optics don’t look good.”
“I have accepted that the US military is going to do some amount of surveillance on foreigners, and I know foreign governments try to do it to us, but I still don’t like it,” Altman said in a post on X. “On the other hand, I also respect the democratic process. I don’t think this is up to me to decide.”
OpenAI’s leadership, for its part, appears to place a significant amount of trust in the government.
“U.S. law already constrains the worst outcomes,” Mulligan wrote, while Altman said that the U.S. government is “an institution that does its best to follow law and policy.”
But if the mass surveillance scandals of recent history tell us anything, it’s that the American government can find wiggle room in existing constraints when it needs to. Nor have questions of constitutionality stopped the Trump administration from going full speed ahead with military strikes in the past, as in the case of the highly contested boat strikes in the Caribbean late last year, which appear to meet the definition of a war crime, according to the ACLU. Additionally, the U.S. Congress hasn’t exactly been in a hurry to write laws that take the existence of AI into consideration.
The executives also claim that the deal offered to OpenAI was different from the one offered to Anthropic.
“I think Anthropic may have wanted more operational control than we did,” Altman said. “We have expertise with the technology and understand its limitations, but I think you should be terrified of a private company deciding on what is and isn’t ethical in the most important areas.”
Instead, the company will have forward-deployed engineers helping monitor the Pentagon’s use of its technology, OpenAI executives said on X. Selling the model with technical controls, Mulligan said, is “often more reliable than contract clauses,” like the provisions that Anthropic sought from the Pentagon, but an unnamed source told The Verge on Monday that the impact of these safeguards is limited.
OpenAI’s former geopolitics researcher Sarah Shoker also shared in a Substack post on Saturday that the defense industry lacks consensus on what adequate human supervision in autonomous weapons actually means in practice, which could be where Anthropic disagreed with the Pentagon while OpenAI did not.
While OpenAI executives took to X to defend the company’s decision, dozens of employees have taken a more critical approach.
Before the announcement that OpenAI had secured the Pentagon deal, 96 employees signed an open letter asking company leadership to “continue to refuse the Department of War’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.”
Many OpenAI employees, including VPs and department heads, have also signed an open letter addressed to the Pentagon to withdraw the supply chain risk designation for Anthropic.
And at least one research scientist has taken to X to voice disagreement with the contract.
“I personally don’t think this deal was worth it,” OpenAI research scientist Aidan McLaughlin said in a post on X, adding that there is an “overwhelming” amount of internal discussion on the decision.
If you currently work or have previously worked at OpenAI, we would love to hear from you. Reach out to us at [email protected]
