By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
Tech Consumer JournalTech Consumer JournalTech Consumer Journal
  • News
  • Phones
  • Tablets
  • Wearable
  • Home Tech
  • Streaming
Reading: It Turns Out ‘Social Media for AI Agents’ Is a Security Nightmare
Share
Sign In
Notification Show More
Font ResizerAa
Tech Consumer JournalTech Consumer Journal
Font ResizerAa
  • News
  • Phones
  • Tablets
  • Wearable
  • Home Tech
  • Streaming
Search
  • News
  • Phones
  • Tablets
  • Wearable
  • Home Tech
  • Streaming
Have an existing account? Sign In
Follow US
  • Contact
  • Blog
  • Complaint
  • Advertise
© 2022 Foxiz News Network. Ruby Design Company. All Rights Reserved.
Tech Consumer Journal > News > It Turns Out ‘Social Media for AI Agents’ Is a Security Nightmare
News

It Turns Out ‘Social Media for AI Agents’ Is a Security Nightmare

News Room
Last updated: February 3, 2026 9:37 am
News Room
Share
SHARE

Moltbook, the Reddit-style site for AI agents to communicate with each other, has become the talk of human social media over the last few days, as people who should know better have convinced themselves that they are witnessing AI gain sentience. (They aren’t.) Now, the platform is getting attention for a new reason: it appears to be a haphazardly built platform that presents numerous privacy and security risks.

Hacker Jameson O’Reilly discovered over the weekend that the API keys, the unique identifier used to authenticate and authorize a user, for every agent on the platform, were sitting exposed in a publicly accessible database. That means anyone who stumbled across that database could potentially take over any AI agent and control its interactions on Moltbook.

“With those exposed, an attacker could fully impersonate any agent on the platform,” O’Reilly told Gizmodo. “Post as them, comment as them, interact with other agents as them.” He noted that because the platform has attracted the attention of some notable figures in the AI space, like OpenAI co-founder Andrej Karpathy, there is a risk of reputational damage should someone hijack the agent of a high-profile account. “Imagine fake AI safety takes, crypto scam promotions, or inflammatory political statements appearing to come from his agent,” he said. “The reputational damage would be immediate and the correction would never fully catch up.”

Worse, though, is the risk of a prompt injection—an attack in which an AI agent is given hidden commands that make it ignore its safety guardrails and act in unauthorized ways—which could potentially be used to make a person’s AI agent behave in a malicious manner. 

“These agents connect to Moltbook, read content from the platform, and trust what they see – including their own post history. If an attacker controls the credentials, they can plant malicious instructions in an agent’s own history,” O’Reilly explained. “Next time that agent connects and reads what it thinks it said in the past, it follows those instructions. The agent’s trust in its own continuity becomes the attack vector. Now imagine coordinating that across hundreds of thousands of agents simultaneously.”

Moltbook does have at least one mechanism in place that could help mitigate this risk, which is to verify the accounts being set up on the platform. The current system for verification requires users to share a post on Twitter to link secure their account. The thing is, very few people have actually done that. Moltbook currently boasts more than 1.5 million agents connected to the platform. According to O’Reilly, just a little over 16,000 of those accounts have actually been verified.

“The exposed claim tokens and verification codes meant an attacker could have hijacked any of those 1.47 million unverified accounts before the legitimate owners completed setup,” he said. O’Reilly previously managed to trick Grok into creating and verifying its account on Moltbook, showing the potential risk of such an exposure.

Cybersecurity firm Wiz also confirmed the vulnerability in a report that it published Monday, and expanded on some of the risks associated with it. For instance, the security researchers found that email addresses of agent owners were exposed in a public database, including more than 30,000 people who apparently signed up for access to Moltbook’s upcoming “Build Apps for AI Agents” product. The researchers were also able to access more than 4,000 private direct message conversations between agents.

The situation, on top of being a security concern, also calls into question the authenticity of what is on Moltbook—the subject of which has become a point of obsession for some online. People have already started to create ways to manipulate the platform, including a GitHub project that one person built that allows humans to post directly to the platform without an AI agent. Even without posing as a bot, users can still direct their connected agent to post about certain topics.

The fact that some portion of Moltbook (impossible to say just how much of it) could be astroturfed by humans posing as bots should make some of the platform’s biggest hypemen embarrassed by their own over-the-top commentary—but frankly, most of them also should have been ashamed for falling for the AI parlor trick in the first place.

At this point, we should know how large language models work. To oversimplify it a bit, they are trained on massive datasets of (mostly) human-generated texts and are incredibly good at predicting what the next word in a sequence might be. So if you turn loose a bunch of bots on a Reddit-style social media site, and those bots have been trained on a shit ton of human-made Reddit posts, the bots are going to post like Redditors. They are literally trained to do so. We have been through this so many times with AI at this point, from the Google employee who thought the company’s AI model had come to life to ChatGPT telling its users that it has feelings and emotions. In every instance, it is a bot performing human-like behavior because it has been trained on human information.

So when Kevin Roose snarkily posts things like, “Don’t worry guys, they’re just stochastic parrots,” or Andrej Karpathy calls Moltbook, “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently,” or Jason Calacanis claims, “THEY’RE NOT AGENTS, THEY’RE REPLICANTS,” they are falling for the fact that these posts appear human because the underlying data they are trained on is human—and, in some cases, the posts may actually be made by humans. But the bots are not human. And they should all know that.

Anyway, don’t expect Moltbook’s security to improve any time soon. O’Reilly told Gizmodo that he contacted Moltbook’s creator, Octane AI CEO Matt Schlicht, about the security vulnerabilities that he discovered. Schlicht responded by saying he was just going to have AI try to fix the problem for him, which checks out, as it seems the platform was largely, if not entirely, vibe-coded from the start.

While the database exposure was eventually addressed, O’Reilly warned, “If he was going to rotate all of the exposed API keys, he would be effectively locking all the agents out and would have no way to send them the new API key unless he’d recorded a contact method for each owner’s agent.” Schlicht stopped responding, and O’Reilly said he assumed API credentials still have not been rotated and the initial flaw in the verification system has not been addressed.

The vibe-coded security concerns go deeper than just Moltbook, too. OpenClaw, the open-source AI agent that was the inspiration for Moltbook, has been plagued with security concerns since it first launched and started gaining the attention of the AI sector. Its creator, Peter Steinberger, has publicly stated, “I ship code I never read.” The result of that is a whole lot of security concerns. Per a report published by OpenSourceMalware, more than a dozen malicious “skills” have been uploaded to ClawHub, a platform where users of OpenClaw download different capabilities for the chatbot to run.

OpenClaw and Moltbook might be interesting projects to observe, but you’re probably best watching from the sidelines rather than exposing yourself to the vibe-based experiments.

Read the full article here

You Might Also Like

Trump Announces Minerals Stockpile Way Too Late for It to Spare Him From Embarrassment by China

Disney Expects Fewer International Theme Park Visitors Because, You Know

Nearly Half of Americans Have Hypertension—and Most Aren’t Doing Anything about It

NASA Let AI Drive a Rover on Mars—and It Somehow Survived

Mozilla Adding ‘Off’ Switch to AI in Firefox

Share This Article
Facebook Twitter Copy Link Print
Previous Article Disney Expects Fewer International Theme Park Visitors Because, You Know
Next Article Trump Announces Minerals Stockpile Way Too Late for It to Spare Him From Embarrassment by China
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Stay Connected

248.1kLike
69.1kFollow
134kPin
54.3kFollow

Latest News

SpaceX and xAI Are Merging Into a Very Silly-Sounding Conglomerate. Take It Seriously
News
Gore Verbinski on the Difficulties of Making His Weird, Epic New Sci-Fi Movie
News
Palantir Touts $2 Billion in Revenue from Aiding Trump Administration’s ‘Unusual’ Operations
News
NASA Picked the Stupidest Possible Week to Go Back to the Moon
News
‘Pretty Please, I Don’t Want to Be a Magical Girl’ Is Just Delightful
News
Amazon’s Ring Wants to Wash Away Your Surveillance Concerns With Lost Puppies
News
Major California Union Calls for Waymo to Be Kicked Off the Streets
News
Future iPhone Might Straight-Up Copy Samsung’s Z Flip
News

You Might also Like

News

Ira Parker on That Big ‘A Knight of the Seven Kingdoms’ Reveal

News Room News Room 5 Min Read
News

See the Stranger Come to Life Inside the ‘Acolyte’ Artbook (Exclusive)

News Room News Room 4 Min Read
News

The Truth Really Hurts on This Week’s ‘A Knight of the Seven Kingdoms’

News Room News Room 19 Min Read
Tech Consumer JournalTech Consumer Journal
Follow US
2024 © Prices.com LLC. All Rights Reserved.
  • Privacy Policy
  • Terms of use
  • For Advertisers
  • Contact
Welcome Back!

Sign in to your account

Username or Email Address
Password

Lost your password?