Tech Consumer Journal
News

I Never Would’ve Guessed the Skynet Problem Would Come Before the Mass Layoffs

News Room
Last updated: February 27, 2026 8:21 pm

You may have heard that the Department of Defense and Anthropic are fighting over the AI company’s guardrails for Claude. Every day brings fresh leaks, and now the Washington Post is reporting that the Pentagon allegedly presented Anthropic with a hypothetical nuclear missile attack on the U.S. as a manipulative way of asking whether the military would be allowed to use the company’s AI model to defend the country.

“Anthropic chief executive Dario Amodei’s answer rankled the Pentagon, according to the official, who characterized the CEO’s reply as: You could call us, and we’d work it out,” the Washington Post reports.

The Pentagon didn’t like that answer, of course, and Anthropic denies the account. But the fact that we’re having this discussion at all is quite a jolt to the senses as we think about the future of AI. Especially as Defense Secretary Pete Hegseth threatens to invoke the Defense Production Act to strip Claude’s guardrails and allow the AI to engage in things like mass domestic surveillance and fully automated warfare.

America’s military leaders apparently want to use AI in all of the situations that sci-fi of the past 80 years has warned us about. And it’s kind of weird that an AI-induced nuclear winter might arrive before the robots take all of our jobs.

Whose jobs are getting replaced?

Increased automation has always meant a loss of jobs. Those fears have been most pronounced over the past century in blue-collar work, where machines have replaced the manual labor of so many humans in factories.

But the rise of AI in recent years has brought those fears to the white-collar world, where many middle-class Americans in the so-called information economy worry they’re about to be replaced by ChatGPT. And they’re right to be concerned. Block announced on Thursday that the company is laying off 40% of its workforce because AI can do the work. But Block’s CEO also admitted that his company overhired during the COVID-19 pandemic, which casts some doubt on his grandiose proclamations about AI.

There haven’t been mass layoffs across the entire economy yet, but it certainly feels like that’s coming, whether it ultimately materializes or not.

Drop your guardrails, or you hate America

At the same time, we’re seeing another danger emerge from AI that’s arguably much more important: fully automated war.

Pete Hegseth met with Anthropic CEO Dario Amodei on Tuesday and delivered an ultimatum: either strip Claude of its safeguards, or see Anthropic labeled a “supply chain risk,” a designation that has never been applied to an American company before.

On top of that, Hegseth reportedly threatened to invoke the Defense Production Act, which would allow the Pentagon to force Anthropic to get rid of those guardrails anyway. The U.S. is not officially at war, and there’s no clear emergency that would necessitate invoking the Defense Production Act.

It’s a difficult position for Anthropic, and the company issued a statement Thursday saying it wouldn’t acquiesce to the military’s demands. The deadline for Anthropic to agree is 5:01 p.m. ET on Friday, so we’ll see what the Pentagon decides to do.

It all feels so terribly manipulative, hearkening back to the post-9/11 arguments you’d hear for supporting torture in the 2000s. Would you waterboard someone if they knew the details about an impending dirty bomb attack on the U.S.? Would you hook up someone’s testicles to a car battery if it meant stopping another 9/11?

AI is not ready to handle nukes

The idea that we should make our weapons, nuclear or otherwise, fully autonomous is an absolutely ridiculous one if you listen to the people who actually build these things.

Amodei’s letter on Thursday acknowledged that partially autonomous weapons are already in use in some parts of the world, but argued that even the most advanced AI is not ready to be handed the keys.

From the Anthropic letter:

Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. In addition, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don’t exist today.

It’s notable that Amodei isn’t even ruling out the use of AI to fully automate the weapons systems of the future. He’s just arguing that AI isn’t there yet.

Will AI ever be ready?

Researchers at King’s College London recently tested GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash in some simulated war games to see how they’d perform. The AI models played 21 games and deployed at least one tactical nuclear weapon in 95% of the games, according to New Scientist.

An AI has no reason to fear deploying nuclear weapons that could wipe out humanity, because it cannot experience fear at all. These models can tell you about fear; they can converse with you and convince humans that they’re in some way conscious, but they’re not. They are tech products that will not hesitate to push the big red button unless stringent guardrails are put in place to stop them.

The military has played around with these ideas for decades, first trying to build Skynet with DARPA’s Strategic Computing Initiative in the 1980s. But the tech wasn’t there yet.

The advent of AI means that we can properly build an autonomous weapons system that requires no human in the loop. The only question is whether that’s a smart thing to do, especially in a time of rising fascism in the U.S.

Military leaders are acting weird

Undersecretary of Defense Emil Michael chided Amodei in a tweet on Thursday, insisting that the Anthropic CEO was lying about the company’s discussion with the Pentagon.

“It’s a shame that @DarioAmodei is a liar and has a God-complex,” wrote Michael. “He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk. The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company.”

It’s an astonishing thing to witness if you step back and remember that none of this was normal in the pre-Trump era. Military leadership would never publicly rail against an American CEO, calling him a liar and saying he has a God-complex. It just didn’t happen for simple reasons of decorum and professionalism.

But it also demonstrates two things: First, that the Pentagon is desperate to use Claude, as Michael’s tweet reeks of desperation. Second, perhaps we should be deeply concerned about what the military wants to do with all of this advanced technology at its disposal. Or, to be more accurate, advanced technology that it wants to take away from a private company.

We might get to find out around 5:01 p.m. ET.
