Anthropic Rolls Back Safety Protocols as It Waits to Find Out If It’s Being Drafted by the Army

News Room
Last updated: February 25, 2026 7:28 pm

Let’s run through a hypothetical real quick: say you’re an AI company that has made safety your calling card, and you’re negotiating the use of your technology with a military that has threatened to punish your business if you don’t abandon your principles. You’d like to maintain your position as the safety-conscious company in the AI space, a reputation that has earned you significant goodwill with the public as you resist government pressure. Is now a good time to announce that you’re rolling back some of your safety protocols and tell the Pentagon that you’re cool with AI launching missiles in certain circumstances?

Anthropic seems to think it is. On Tuesday, the company announced that it was updating its Responsible Scaling Policy, a framework it first introduced in 2023 with the goal of mitigating catastrophic risks associated with AI systems. The company has held the policy up as a differentiator between itself and its competitors, a promise that it puts safety first, even at the risk of falling behind other frontier models that exercise less caution.

Previously, Anthropic’s RSP stated, “We will not train or deploy models capable of causing catastrophic harm unless we have implemented safety and security measures that will keep risks below acceptable levels.” Now, the company claims it’s not so sure that’s worth it if that means losing ground. “We felt that it wouldn’t actually help anyone for us to stop training AI models,” Jared Kaplan, Anthropic’s chief science officer, told TIME. “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”

Anthropic does credit its original RSP with incentivizing it to develop stronger safeguards for its models, but it has essentially said that because other companies haven’t adopted similar restraints, it needs more flexibility than hard red lines allow. “The Responsible Scaling Policy was always planned to be a living document: a policy that had the flexibility to change as AI models become more capable,” the company said in a blog post. Anthropic said it will continue to publish risk reports, but it is going to run with “nonbinding but publicly-declared” safety goals rather than firm internal standards. A generous reading of that would be a commitment to public accountability. A less charitable read might be that the company knows there is no way for the public to actually enforce these standards, so why bother restraining itself?

Anthropic told the Wall Street Journal that the change to its RSP is unrelated to its ongoing negotiations with the Pentagon, which just yesterday gave the company an ultimatum: loosen its safety guardrails so the military can use its AI models as it sees fit, or face consequences. But it’s hard not to read the change in that light.

Anthropic has maintained two primary red lines on military use of its technology: it will not allow its models to be used for mass domestic surveillance or to develop fully autonomous weapons that would operate without human involvement. Defense Secretary Pete Hegseth seems unwilling to accept that, and has threatened to cancel Anthropic’s government contracts, declare Anthropic a “supply chain risk,” and/or invoke the Defense Production Act to force the company to build a model for the military’s desired purposes.

But it appears the company has already been negotiating carveouts that don’t quite cross the red line. On Wednesday, Semafor reported the Pentagon asked Anthropic in December if it would allow its model to be used to autonomously launch missiles to shoot down other missiles. Reportedly, Anthropic said the Pentagon should reach out to ask before moving forward with such a use case—though Semafor reported that Anthropic was and continues to be willing to create a missile defense carveout for its policies.

It’s possible, maybe even likely, that Anthropic was always going to loosen the restrictions it has placed on itself. It’s also possible that change was always going to come this week, regardless of the standoff with the Defense Department over AI safeguards. But given the position Anthropic finds itself in, it does become difficult not to view the situation as the company starting to compromise on its principles.

Gizmodo reached out to Anthropic for more information, but the company did not offer comment prior to publication.
