The Pentagon gave Anthropic, the maker of Claude, an ultimatum: allow the military unfettered access to its AI model, even if the potential uses violate the company’s own safeguards, or face significant punishment. Anthropic refused, with CEO Dario Amodei saying the company “cannot in good conscience accede” to the Department of Defense’s requests. Now, Sam Altman, CEO of OpenAI, would like you to know that he also would have acted in a principled manner if he ever had to.
In a memo circulated to OpenAI employees Thursday night—and definitely not cynically leaked to the press so that everyone knows what a brave boy Sam Altman is—the founder and CEO told his firm, “We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines.”
Those are, of course, the exact same red lines that Anthropic has held to and that have reportedly been an issue for the Pentagon despite its insistence that it only wants to use AI for “all lawful purposes.” The best read on that position is that there aren’t currently laws preventing the use of autonomous weapons, and the Pentagon would like to use Claude to deploy its arsenal in some way. Anthropic has already reportedly created a carveout in its red line policy that would allow the Defense Department to use Claude for “defensive weapons,” but that clearly wasn’t enough for the agency, or else this standoff would already be over.
Altman adopting Anthropic’s no-go policies doesn’t even put him second in line behind Amodei in drawing a line in the sand with the military. More than 100 employees at Google beat him to the punch, signing and sending a letter to management asking the company to adopt the same red lines as Anthropic if the company is going to continue to do business with the Pentagon. But that must have at least been a big enough gust for Altman to feel where the wind was blowing and plant his flag.
To Anthropic’s credit, it did actually have to stare down Pentagon pressure, the details of which have continued to trickle out over the course of the week. The latest detail, reported by the Washington Post, included the Department of Defense peppering the company with hypotheticals like whether Claude could be used to shoot down an intercontinental ballistic missile launched at the United States. (Anthropic’s CEO reportedly told the DoD to call and ask, which was not a satisfying answer to the Pentagon, though Anthropic denies it. A recent study found that chatbots, including Claude, launch nuclear weapons in 95% of war games, so making a call seems like the least of the potential problematic outcomes.)
The company didn’t budge, even though the Pentagon threatened to cancel Anthropic’s government contracts, declare Anthropic a “supply chain risk,” and/or invoke the Defense Production Act to force the company to build a model for the military’s desired purposes. It does seem those threats might have been more bluster than anything, as Bloomberg reported the Pentagon is still open to negotiating, which Anthropic probably anticipated. The company also has more leverage than one might imagine, given that Defense Department favorite Palantir relies on Anthropic’s model to operate its cloud infrastructure.
With its biggest guns unable to get Anthropic to cave, the Pentagon has resorted to a new approach: petty insults. Undersecretary of Defense Emil Michael spent most of Thursday blasting Anthropic on Twitter, calling Amodei a “liar” with a “God complex” and claiming that Anthropic’s CEO wanted to “personally control the US Military.” He also alleged that Anthropic’s constitution for Claude, a document that dictates how the chatbot should act, was actually the company trying to position its own rules to supersede the US Constitution, which is not how that works at all.
It’s a bit hard to imagine the Department of Defense winning this stand-off in the court of public opinion, given that Anthropic’s position is “let’s agree not to spy on Americans or let AI nuke people,” and the Pentagon’s response was “No.” But it sure seems the agency wants to have the fight for all to see, for whatever reason.
