OpenAI’s recent dealings with the U.S. military have raised a lot of eyebrows. But it wasn’t that long ago that the AI company had a policy banning its models from being used by militaries or for warfare. Despite that ban, the Pentagon had been testing a version of OpenAI’s models through Microsoft in an apparent workaround as far back as 2023, according to Wired.
The tech news outlet reported Thursday, citing unnamed sources, that the Pentagon had been experimenting with Microsoft’s Azure OpenAI service that year. A Microsoft spokesperson confirmed to the outlet that Azure OpenAI became available to the U.S. government in 2023 and was subject to Microsoft’s terms of service. The spokesperson did not say exactly when the models became available to the Pentagon, but noted the service was not approved for “top secret” government workloads until 2025.
OpenAI and Microsoft did not immediately respond to a request for comment from Gizmodo.
The report highlights how quickly things have changed in the AI industry. Just a few years ago, companies at least tried to appear like they were avoiding military work. Now, many seem to be tripping over themselves to land Pentagon deals.
OpenAI’s usage policies once banned any “activity that has high risk of physical harm,” explicitly including “weapons development” and “military and warfare.” But in January 2024, OpenAI quietly updated that section of its policy and removed the blanket ban on “military and warfare,” The Intercept reported at the time.
The removal of that ban paved the way for the AI company to sign its own multimillion-dollar contract with the Pentagon and kick off the current messy saga over which AI models the U.S. military will use in classified settings.
Last June, the company announced its OpenAI for Government initiative, aimed at bringing its most “advanced AI tools to public servants across the United States.” The program included a pilot with the Department of Defense’s Chief Digital and Artificial Intelligence Office (CDAO), with a contract ceiling of $200 million. Then in February, OpenAI announced it had made a customized version of ChatGPT available through the Defense Department’s AI platform, GenAI.mil, for unclassified uses, joining xAI and Google’s Gemini on the platform.
Most recently, the company said last week that it had reached an agreement with the Pentagon to deploy its advanced AI systems in classified environments.
The announcement came just hours after the Defense Department’s negotiations with OpenAI rival Anthropic collapsed. At the time, Anthropic was the only AI company cleared to operate in the military’s classified systems.
The Pentagon wanted Anthropic to allow the military to use its models for “all lawful purposes” as it raced to expand its use of AI. Anthropic pushed back, insisting on guardrails to prevent its models from being used for domestic surveillance or fully autonomous weapons systems.
Defense Secretary Pete Hegseth wrote in a post on X that after the two sides failed to reach a deal Friday, he directed the department to designate Anthropic a supply-chain risk. The move would bar contractors, suppliers, or companies seeking to do business with the U.S. military from maintaining commercial ties with Anthropic.
Hours later, OpenAI announced its deal with the Pentagon, which even CEO Sam Altman acknowledged made the timing look “opportunistic and sloppy.”
Anthropic CEO Dario Amodei said in a statement Thursday that most of the company’s customers should not be affected by the designation and that Anthropic plans to challenge it in court. The same day, Microsoft became the first major company to say it would continue offering Anthropic’s AI products to clients despite the designation.
Amodei also apologized for a leaked staff memo in which he wrote that President Donald Trump dislikes Anthropic because the company has not donated to him and has refused to give him “dictator-style praise.”
“It was a difficult day for the company, and I apologize for the tone of the post. It does not reflect my careful or considered views,” Amodei wrote. “It was also written six days ago, and is an out-of-date assessment of the current situation.”
