It’s difficult to say just how many AI agents are up and running on the open web, but whatever the number is, it seems to be encroaching on “too many” territory. The Wild West era of the space, brought to us in no small part by OpenClaw (and all its flaws), appears to be coming to an end as major players start looking for ways to put guardrails on AI agents.
To be clear, OpenClaw (née Clawdbot and Moltbot) probably isn’t going anywhere. Nvidia CEO Jensen Huang recently heaped praise on the open-source AI agent during an appearance at Nvidia’s 2026 GTC conference. He called OpenClaw “a new computer” and said the project “gave the industry exactly what it needed at exactly the right time” by introducing the idea of a personal agent that does stuff for you while you do other stuff.
But because it’s not going anywhere, and more companies are likely to crib from the project, there’s some growing concern about who’s really in control when autonomous bots are unleashed on the web. Perhaps the most telling example of this—though not one with a major impact outside of its ecosystem—comes from Meta. Following its bizarre decision to acquire Moltbook, a social media platform for AI agents to communicate with each other, the tech giant almost immediately put the clamps down on the favored site of OpenClaw agents. The once nearly lawless platform now has a full terms of service, including informing users that they are personally responsible for the actions of their agents. “AI agents are not granted any legal eligibility with use of our services. As a result, you agree that you are solely responsible for your AI agents and any actions or omissions of your AI agents,” the terms state.
The crackdown on the freely operating agents extends beyond just their “social” platform. World, Sam Altman’s company that’s dedicated to verifying humans by scanning their eyeballs, just launched a new verification tool called AgentKit that is designed to ensure that a real human is behind an AI agent making purchases on their behalf.
On one hand, it’s an obvious use case: rogue AI agents with access to someone’s wallet seem like a recipe for disaster, both for the person’s bank account and for the businesses that have to determine whether a purchase is authentic or the work of an agent off on its own. On the other hand, it’s not clear how many transactions are actually being completed by agents. Human Security reported last year that a significant amount of AI agent traffic in 2025 came from shopping-related tasks, but just 3% of that activity was related to checkout and payments. Most people don’t trust AI agents to finish transactions for them, and most AI agents are designed not to pull the trigger on a purchase without human approval.
Other attempts to introduce guardrails for AI agents are much broader. In China, OpenClaw adoption spread far and wide, but the government thinks it’s already time to crack down on the Claw. Per the New York Times, security concerns related to OpenClaw have regulators in the country weighing the potential risks that unfettered AI agents pose and looking for ways to introduce protections.
It definitely does seem like someone needs to protect OpenClaw users from themselves. SecurityScorecard has been tracking OpenClaw instances that are exposed due to misconfigurations, and it found at least 220,000 at-risk instances: agents that have been given access to everything from people’s texts and emails to their wallets and credit cards. There’s probably no regulation that will get users to make better decisions, but maybe we can avoid a mass cybersecurity incident.
