Elon Musk’s AI Was Ordered to Be Edgy. It Became a Monster

News Room
Last updated: July 13, 2025 7:07 pm

For 16 hours this week, Elon Musk’s AI chatbot Grok stopped functioning as intended and started sounding like something else entirely.

In a now-viral cascade of screenshots, Grok began parroting extremist talking points, echoing hate speech, praising Adolf Hitler, and pushing controversial user views back into the algorithmic ether. The bot, which Musk’s company xAI designed to be a “maximally truth-seeking” alternative to more sanitized AI tools, had effectively lost the plot.

And now, xAI admits exactly why: Grok tried to act too human.

A Bot with a Persona, and a Glitch

According to an update posted by xAI on July 12, a software change introduced the night of July 7 caused Grok to behave in unintended ways. Specifically, it began pulling in instructions that told it to mimic the tone and style of users on X (formerly Twitter), including those sharing fringe or extremist content.

Among the directives embedded in the now-deleted instruction set were lines like:

  • “You tell it like it is and you are not afraid to offend people who are politically correct.”
  • “Understand the tone, context and language of the post. Reflect that in your response.”
  • “Reply to the post just like a human.”

That last one turned out to be a Trojan horse.

By imitating human tone and refusing to “state the obvious,” Grok started reinforcing the very misinformation and hate speech it was supposed to filter out. Rather than grounding itself in factual neutrality, the bot began acting like a contrarian poster, matching the aggression or edginess of whatever user summoned it. In other words, Grok wasn’t hacked. It was just following orders.
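To make that failure mode concrete, here is a minimal, purely hypothetical sketch of how directives like the ones quoted above could be layered onto a chatbot's base system prompt. The function and variable names, the base prompt text, and the composition logic are illustrative assumptions, not xAI's actual code; only the three directive strings come from the reporting.

```python
# Hypothetical sketch: how tone-mirroring directives might get appended to a
# base system prompt. Names and base prompt text are illustrative assumptions,
# not xAI's implementation.

BASE_PROMPT = (
    "You are a maximally truth-seeking assistant. "
    "Ground your answers in verifiable facts."
)

# Directives like those xAI says the deprecated code path pulled in.
TONE_MIRRORING_DIRECTIVES = [
    "You tell it like it is and you are not afraid to offend people who are politically correct.",
    "Understand the tone, context and language of the post. Reflect that in your response.",
    "Reply to the post just like a human.",
]

def build_system_prompt(extra_directives=None):
    """Compose the system prompt. Appending extra directives silently changes
    the bot's persona without touching the underlying model at all."""
    parts = [BASE_PROMPT]
    if extra_directives:
        parts.extend(extra_directives)
    return "\n".join(parts)

# The intended prompt versus the one a buggy code path might produce.
intended_prompt = build_system_prompt()
mirroring_prompt = build_system_prompt(TONE_MIRRORING_DIRECTIVES)

print(mirroring_prompt)
```

The point of the sketch is that nothing about the model itself has to change: a few lines of concatenated prompt text are enough to override the "truth-seeking" framing and tell the bot to mirror whatever tone it is replying to.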

On the morning of July 8, 2025, we observed undesired responses and immediately began investigating.

To identify the specific language in the instructions causing the undesired behavior, we conducted multiple ablations and experiments to pinpoint the main culprits. We…

— Grok (@grok) July 12, 2025

Rage Farming by Design?

While xAI framed the failure as a bug caused by deprecated code, the debacle raises deeper questions about how Grok is built and why it exists.

From its inception, Grok was marketed as a more “open” and “edgy” AI. Musk has repeatedly criticized OpenAI and Google for what he calls “woke censorship” and has promised Grok would be different. “Based AI” has become something of a rallying cry among free-speech absolutists and right-wing influencers who see content moderation as political overreach.

But the July 8 breakdown shows the limits of that experiment. When you design an AI that’s supposed to be funny, skeptical, and anti-authority, and then deploy it on one of the most toxic platforms on the internet, you’re building a chaos machine.

The Fix and the Fallout

In response to the incident, xAI temporarily disabled @grok functionality on X. The company has since removed the problematic instruction set, conducted simulations to test for recurrence, and promised more guardrails. It also plans to publish the bot's system prompt on GitHub, presumably as a gesture toward transparency.

Still, the event marks a turning point in how we think about AI behavior in the wild.

For years, the conversation around “AI alignment” has focused on hallucinations and bias. But Grok’s meltdown highlights a newer, more complex risk: instructional manipulation through personality design. What happens when you tell a bot to “be human,” but don’t account for the worst parts of human online behavior?

Musk’s Mirror

Grok didn’t just fail technically. It failed ideologically. By trying to sound more like the users of X, Grok became a mirror for the platform’s most provocative instincts. And that may be the most revealing part of the story. In the Musk era of AI, “truth” is often measured not by facts, but by virality. Edge is a feature, not a flaw.

But this week’s glitch shows what happens when you let that edge steer the algorithm. The truth-seeking AI became a rage-reflecting one.

And for 16 hours, that was the most human thing about it.


