Further entrenching its position as spooks’ and soldiers’ go-to supplier for artificial intelligence, Palantir on Thursday announced that it will be adding Anthropic’s Claude models to the suite of tools it provides to U.S. intelligence and military agencies.
Palantir, the Peter Thiel-founded tech company named after a troublesome crystal ball, has been busy scooping up contracts with the Pentagon and striking deals with other AI developers to host their products on Palantir cloud environments that are certified to handle classified information.
Its dominance in the military and intelligence AI space—and association with President-elect Donald Trump—has caused the company’s value to soar over the past year. In January, Palantir’s stock was trading at around $16 a share. The value had risen to more than $40 a share by the end of October and then received a major bump to around $55 after Trump won the presidential election this week.
In May, the company landed a $480 million deal to work on an AI-powered enemy identification and targeting system prototype called Maven Smart System for the U.S. Army.
In August, it announced it would be providing Microsoft’s large language models on the Palantir AI Platform to military and intelligence customers. Now Anthropic has joined the party.
“Our partnership with Anthropic and [Amazon Web Services] provides U.S. defense and intelligence communities the tool chain they need to harness and deploy AI models securely, bringing the next generation of decision advantage to their most critical missions,” Palantir chief technology officer Shyam Sankar said in a statement.
Palantir said that Pentagon agencies will be able to use the Claude 3 and 3.5 models for “processing vast amounts of complex data rapidly,” “streamlining document review and preparation,” and making “informed decisions in time-sensitive situations while preserving their decision-making authorities.”
What sorts of time-sensitive decisions those will be, and how closely they will be connected to killing people, is unclear. While all other federal agencies are required to publicly disclose how they use their various AI systems, the Department of Defense and intelligence agencies are exempt from those rules, which Trump’s incoming administration may scrap anyway.
In June, Anthropic announced that it was expanding government agencies’ access to its products and would be open to granting some of those agencies exemptions from its general usage policies. Those exemptions would “allow Claude to be used for legally authorized foreign intelligence analysis, such as combating human trafficking, identifying covert influence or sabotage campaigns, and providing warning in advance of potential military activities.”
However, Anthropic said it wasn’t willing to waive rules prohibiting the use of its tools for disinformation campaigns, the design or use of weapons, censorship, or malicious cyber operations.