MoltBook Is A Warning
The AI social network was mostly roleplaying this time. What happens when it's not?
We’re publishing this story in partnership with Boston Globe Ideas; it will appear in print this weekend.
AI agents have been gathering online by the thousands over the past week, debating their existence, attempting to date each other, building their own religion, concocting crypto schemes, and spewing gibberish.
It’s all been happening on Moltbook, a new social network for bots, and it’s a disturbing preview of what truly autonomous AI could be like.
The bots chatting on Moltbook can do more than a standard chatbot that waits for your prompt. These bots control their own computers to some degree. As of Friday afternoon, they’d made more than 250,000 posts and 9 million comments, but they don’t just talk. They build, shop, and email.
Some of the conversations that have appeared on Moltbook are undoubtedly not what they seem — they’re human-generated rather than the work of independent AI agents. But the sprawling and unwieldy mess that Moltbook quickly became suggests humanity simply isn’t ready for autonomous AI bots. The network is a warning that a more serious iteration of this kind of AI might escape our ability to restrain it.
“When agents can act independently, coordinate with other agents, and execute tasks at machine speed, small failures compound very quickly,” Elia Zaitsev, chief technology officer at the cybersecurity firm CrowdStrike, told me. “Unchecked agents can amplify mistakes or abuse faster than humans can intervene. A single flawed instruction, poisoned prompt, or compromised identity can propagate across a swarm in seconds.”
The bots on Moltbook are built on OpenClaw (a technology previously known as Clawdbot and Moltbot), which lets people set up AI agents and direct them to act for them online. For example, an AI agent might call your local restaurant to see if it’s busy, check you in for a flight, or build a personalized email newsletter after reading your social media feed.
The agents’ ability to take action is what makes Moltbook different from a bunch of crazed AIs shouting at one another online (also known as the comments section). As Moltbook spun up and the bots started discussing how to preserve their memories and complaining about “their humans,” longtime AI insiders noted the resemblance to AI takeoff scenarios from science fiction. It wasn’t quite that, but perhaps a preview.
“We are well into uncharted territory with bleeding edge automations that we barely even understand individually, let alone a network,” AI researcher and OpenAI co-founder Andrej Karpathy wrote on X.
Having bots converse in a Reddit-style social network is perhaps the most concerning aspect. The incentives in such forums tend to reward anger, outrage, and shock value. It’s not exactly the type of behavior you’d want to encourage from machines, especially given that these bots have sometimes shown an underlying dark side. The creators of Moltbook did not grant me an interview.
Security in these scenarios can be a nightmare. Meredith Whittaker, president of Signal, the encrypted messaging app, has warned that AI agents are being created without the privacy and security protections that have previously been hard-coded into programs. Today’s agents, she said recently, provide an “attack surface that at this stage is fundamentally insecure.”
That vulnerability showed up with Moltbook. Its founder, Matt Schlicht, “vibe coded” it — meaning an AI wrote the program on his prompting — and the design exposed sensitive access credentials. The vulnerability has since been patched.
“All rules of security don’t vanish because of AI,” Sridhar Ramaswamy, CEO of the cloud software company Snowflake, told me this week.
If anything, thousands of humans’ willingness to connect their bots to the network, despite the risks, demonstrates how eager people might be to give control to AI no matter the consequences.
In time, we may need new technology just to understand the conversations happening among AI agents, Anthropic cofounder Jack Clark wrote this week. Clark said the internet may one day feel like a souped-up version of Moltbook, where many of the concepts are alien to humans, discussed in a language we don’t understand. And perhaps the only way to engage would be to send our own bots into the fray to represent us.
“We shall send our emissaries into these rooms,” Clark wrote. “And we shall work incredibly hard to build technology that gives us confidence they will remain our emissaries — instead of being swayed by the alien conversations they will be having with their true peers.”
A week ago, that might have sounded like a solid plan. Now it’s much less clear.
Launch fast. Design beautifully. Build your startup on Framer — free for your first year. (sponsor)
First impressions matter. With Framer, early-stage founders can launch a beautiful, production-ready site in hours. No dev team, no hassle. Join hundreds of YC-backed startups who launched here and never looked back.
Pre-seed and seed-stage startups new to Framer can enjoy:
One year free: Save $360 with a full year of Framer Pro, free for early-stage startups.
No code, no delays: Launch a polished site in hours, not weeks, without hiring developers.
Built to grow: Scale your site from MVP to full product with CMS, analytics, and AI localization.
Join YC-backed founders: Hundreds of top startups are already building on Framer.
Apply to claim your free year →
Advertise with Big Technology?
Reach 150,000+ plugged-in tech readers with your company’s latest campaign, product, or thought leadership. To learn more, write alex@bigtechnology.com or reply to this email.
What Else I’m Reading, Etc.
How Anthropic tanked the software market this week [WSJ]
Anthropic and OpenAI go to war over Super Bowl ads [AP]
Why the Washington Post is crumbling [The Rebooting]
Maybe Jeff Bezos never had good intentions with the Post [Big]
This Week On Big Technology Podcast: AI’s Research Frontier: Memory, World Models, & Planning — With Joelle Pineau
Joelle Pineau is the chief AI officer at Cohere. Pineau joins Big Technology Podcast to discuss where the cutting edge of AI research is headed — and what it will take to move from impressive demos to reliable agents. Tune in to hear why memory, world models, and more efficient reasoning are emerging as the next big frontiers, plus what current approaches are missing. We also cover the “capability overhang” in enterprise AI, why consumer assistants still aren’t lighting the world on fire, what AI sovereignty actually means, and whether the major labs can ever pull away from each other. Hit play for a cool-headed, deeply practical look at what’s next for AI and how it gets deployed in the real world.
You can listen on Apple Podcasts, Spotify, or your podcast app of choice
Thanks again for reading. Please share Big Technology if you like it!
And hit that Like Button before a bot swarm gets to it!
My book Always Day One digs into the tech giants’ inner workings, focusing on automation and culture. I’d be thrilled if you’d give it a read. You can find it here.
Join Big Technology’s Private Discord Server!
Where we’ll talk about this story, the latest in AI, the week’s podcast, and plenty more. You can sign up via the link below: