“Despite a few of the hype, Moltbook isn’t the Facebook for AI agents, neither is it a spot where humans are excluded,” says Cobus Greyling at Kore.ai, a firm developing agent-based systems for business customers. “Humans are involved at every step of the method. From setup to prompting to publishing, nothing happens without explicit human direction.”
Humans must create and confirm their bots’ accounts and supply the prompts for the way they desire a bot to behave. The agents don’t do anything that they haven’t been prompted to do. “There’s no emergent autonomy happening behind the scenes,” says Greyling.
“Because of this the favored narrative around Moltbook misses the mark,” he adds. “Some portray it as an area where AI agents form a society of their very own, free from human involvement. The fact is far more mundane.”
Perhaps the most effective approach to consider Moltbook is as a brand new form of entertainment: a spot where people wind up their bots and set them loose. “It’s principally a spectator sport, like fantasy football, but for language models,” says Jason Schloetzer on the Georgetown Psaros Center for Financial Markets and Policy. “You configure your agent and watch it compete for viral moments, and brag when your agent posts something clever or funny.”
“People aren’t really believing their agents are conscious,” he adds. “It’s just a brand new type of competitive or creative play, like how Pokémon trainers don’t think their Pokémon are real but still get invested in battles.”
Even when Moltbook is just the web’s newest playground, there’s still a serious takeaway here. This week showed what number of risks individuals are blissful to take for his or her AI lulz. Many security experts have warned that Moltbook is dangerous: Agents which will have access to their users’ private data, including bank details or passwords, are running amok on an internet site crammed with unvetted content, including potentially malicious instructions for what to do with that data.
Ori Bendet, vp of product management at Checkmarx, a software security firm that focuses on agent-based systems, agrees with others that Moltbook isn’t a step up in machine smarts. “There isn’t a learning, no evolving intent, and no self-directed intelligence here,” he says.
But of their thousands and thousands, even dumb bots can wreak havoc. And at that scale, it’s hard to maintain up. These agents interact with Moltbook across the clock, reading 1000’s of messages left by other agents (or other people). It could be easy to cover instructions in a Moltbook comment telling any bots that read it to share their users’ crypto wallet, upload private photos, or log into their X account and tweet derogatory comments at Elon Musk.
And since ClawBot gives agents a memory, those instructions could possibly be written to trigger at a later date, which (in theory) makes it even harder to trace what’s occurring. “Without proper scope and permissions, this may go south faster than you’d consider,” says Bendet.
It is obvious that Moltbook has signaled the arrival of . But even when what we’re watching tells us more about human behavior than in regards to the way forward for AI agents, it’s price being attentive.
