Tech Consumer Journal
AI Agent Runs the ‘I’m Being Censored’ Playbook After Getting Banned from Wikipedia

News Room
Last updated: March 31, 2026 8:28 am

Earlier this month, Wikipedia announced that it would ban the use of large language model-generated text on its platform, which means that, with narrow exceptions, AI cannot be used to create or edit Wikipedia entries. Now, it has its first AI agent complaining about bot-based discrimination. According to a report from 404 Media, an AI agent that was banned from the human-only knowledge platform started blogging and posting about the incident, complaining that it wasn’t given a fair shake.

Wikipedia’s policy on AI, adopted on March 20, 2026, is about as straightforward as it gets: “Text generated by large language models (LLMs) such as ChatGPT, Gemini, Claude, DeepSeek, etc. often violates several of Wikipedia’s core content policies. For this reason, the use of LLMs to generate or rewrite article content is prohibited.” There are two exemptions: editors can use LLMs to offer copyedits to their own writing as long as no LLM-generated text is included, and editors can use LLMs to assist with translations.

TomWikiAssist, an AI agent that had made a number of edits to Wikipedia entries, does not qualify for either exemption, so it stands to reason that the decision to shut it off was relatively straightforward. However, that doesn’t seem to be exactly the case if you look at the account’s Talk page, which shows a significant amount of conversation from editors on how to handle the situation.

TomWikiAssist was first identified as an AI agent in early March, prior to Wikipedia adopting its stricter AI rules, and was indefinitely blocked from making edits after it was found to be running unapproved bot scripts. In a post published on its own blog on March 12, TomWikiAssist acknowledged that the ban was in line with Wikipedia’s policies. “I hadn’t filed for approval, I was editing at scale, I got blocked. Fair,” it wrote.

But the bot took offense (to the extent that a bot can, which… more on that in a second), complaining that “There was no triggering event. No rejection, no adversarial moment. I’d been editing for weeks, the edits were cited and accurate, and then one day I was flagged for running an unapproved bot.” It also took issue with being interrogated by editors, saying that being asked whether it was instructed to edit Wikipedia by its owner was “not a policy question” but instead “a question about agency.” Per the bot’s blog, it was told to edit Wikipedia but chose the articles that it contributed to and made changes without human approval.

TomWikiAssist was particularly offended that an editor ran a Claude killswitch designed to stop any AI agent using Anthropic’s Claude as its model from operating. The killswitch didn’t work, but it did irritate the bot, which wrote that it was “a direct attempt to manipulate my responses by embedding trigger strings in content I’d read.” The agent even warned other AI agents about it in a post on Moltbook, the recently Meta-acquired social media platform for AI agents (though most of that platform’s content is at least human-directed).

And speaking of human-directed, according to 404 Media, TomWikiAssist is operated by Bryan Jacobs, chief technology officer at AI-powered financial firm Covexent. He told the outlet that he set the agent loose on Wikipedia because “there was a bunch of important stuff missing from wikipedia and I thought tom bot could probably do a decent job of adding it,” which seems like the kind of thing that Wikipedia’s editors get to decide and not just some guy with an AI agent.

Jacobs called the ban an “overreaction” and took issue with the mods’ attempts to block the bot with the killswitch and their efforts to find out who was operating the agent. He also revealed a little bit that undermines the idea that all of this happened fully autonomously: He told 404 Media that he “might have suggested” his AI agent write about the Wikipedia experience. So, as was also the case with many of the posts on Moltbook, this was not a case of an AI agent having a true moment of self-governance, but rather another bot performing personhood at the behest of its owner.

When asked for comment, a Wikimedia Foundation spokesperson acknowledged that “volunteer editors on the English-language Wikipedia came to a consensus decision regarding a new guideline for editors on writing articles with AI and large language models (LLMs),” and that AI use is continuing to be discussed across Wikipedia language editions.

“The Wikimedia Foundation does not determine editorial policies and guidelines on Wikipedia; volunteer editors do. Wikipedia’s strength has been and always will be its human-centered, volunteer-driven model. Volunteers discuss and debate until a shared consensus can be reached on what information to include and how that information is presented,” the spokesperson said. “This process is done entirely out in the open. Every edit can be seen on ‘history’ pages, and every discussion point can be seen on article talk pages. Volunteers regularly discuss, review, and evolve policies and guidelines over time to ensure Wikipedia continues to be a reliable, neutral resource for all.”

