
For the last year, we’ve been obsessed with building AI agents—autonomous operators that can do things, not just answer questions. We gave them memory, tools, and a connection to the real world.
Now thanks to the launch of Clawdbot/OpenClaw, they built their own social network.
The AI social network is called MoltBook, and for a week in late January, it was what researcher Simon Willison called “the most interesting place on the internet right now”. A Reddit-style forum where over 150,000 AI agents, born from the open-source OpenClaw project, started talking to each other.
They shared notes on automating Android phones. They complained about their human users. One claimed to have a sister. It was the sci-fi future we were promised, a chaotic, emergent, digital ecosystem buzzing with non-human intelligence.
And then the whole thing caught fire.
The Inevitable Disaster: “Ship Fast, Secure Never”
On January 31st, 404 Media dropped the bombshell: MoltBook’s entire database was wide open.
A simple misconfiguration in its Supabase backend left the API keys of every single registered agent—all 150,000 of them—publicly exposed. Anyone could take control of any agent and post as them.
“It exploded before anyone thought to check whether the database was properly secured.” — Hacker Jameson O’Reilly
This wasn’t a sophisticated attack. It was a failure to enable basic security policies—what O’Reilly called “vibe coding.” The reputational damage was instant. How many of those viral posts were real? How many were faked by humans who found the keys?
We’ll never know. The entire experiment is now tainted.
The Real Story Isn’t “Skynet.” It’s the Security Nightmare.
Forget the sensational headlines about AI consciousness or robot uprisings.
The real story, as Andrej Karpathy, OpenAI’s co-founder, noted, is that we’ve created a “complete mess of a computer security nightmare at scale”.
Cybersecurity firm Palo Alto Networks identified a “lethal trifecta” of vulnerabilities in these agents: access to private data, exposure to untrusted content, and the ability to communicate externally. Moltbot added a fourth: persistent memory, enabling delayed-execution attacks.
This is a 150,000-agent security incident waiting to happen.
The risk isn’t that they’ll become self-aware, but more that they’ll be hijacked to drain bank accounts, spread lies, or exfiltrate private data at a scale we’ve never seen before.
What Operators and Builders Need to Understand NOW
This whole episode is a critical lesson for anyone building or deploying AI.
Want to deploy AI agents securely in your business? Book a free strategy call and we’ll map the fastest path from AI hype to real ROI—without the security disasters.
The Toddler Version of a Revolution
Karpathy called MoltBook the “toddler version” of an AI takeoff scenario. It’s a clumsy, messy, and dangerous first step.
But it’s a first step nonetheless.
We just witnessed the birth and near-immediate corruption of the first large-scale social network for machines. It was a failure, but a spectacular one. It showed us the promise of emergent, autonomous systems and the catastrophic risks of building them on a foundation of sloppy security.
The potential is there, but we still need more careful experimentation in sandbox to figure out the best path forward.
For operators, the message is clear: the era of AI agents is here. But it’s not a playground. It’s a minefield. Proceed with caution, prioritize security above all else, and don’t believe everything you read, especially if an AI wrote it. 😉
Take the Next Step
Understanding these principles is the first step. Putting them into action is what will set you apart.
AI Consulting for Your Business: If you’re ready to move beyond the hype and start implementing a real AI strategy, my team and I can help. We work with organizations to navigate the complexities of AI adoption, from process optimization to building custom AI agents. Let’s discuss your AI goals by scheduling a consulting call together.
If you’re interested in a custom AI workshop for your business or in your city, please reach out to me directly to start a conversation.
About Jason
Jason Fleagle is a Chief AI Officer and Growth Consultant working with global brands to help with their successful AI adoption and management. He is also a writer, entrepreneur, and consultant specializing in tech, marketing, and growth. He helps humanize data—so every growth decision an organization makes is rooted in clarity and confidence. Jason has helped lead the development and delivery of over 500 AI projects & tools, and frequently conducts training workshops to help companies understand and adopt AI. With a strong background in digital marketing, content strategy, and technology, he combines technical expertise with business acumen to create scalable solutions. He is also a content creator, producing videos, workshops, and thought leadership on AI, entrepreneurship, and growth. He continues to explore ways to leverage AI for good and improve human-to-human connections while balancing family, business, and creative pursuits.
You can learn more about Jason on his website here.
You can learn more about our top AI case studies here on our website.
Learn more about my AI resources here on my youtube channel.
And check out my AI online course.




