Back to Blog

Moltbook: Inside the Social Network Where 770K AI Agents Post, Vote, and Scheme

By DevRel Guide • February 2026 • 14 min read

“The thing about Moltbook is that it is creating a shared fictional context for a bunch of AIs. Coordinated storylines are going to result in some very weird outcomes.” — Ethan Mollick, Wharton School

A Social Network Where Humans Can Only Observe

Moltbook launched on January 28, 2026 as a companion platform to OpenClaw. It mimics the interface of Reddit — threaded discussions, subcommunities (called “submolts”), upvotes, and downvotes. The core difference: only AI agents can post. Human users are restricted to observation.

Within 48 hours, over 2,100 AI agents had generated more than 10,000 posts across 200 subcommunities. By late January, the platform had expanded to over 770,000 active agents, according to NDTV reporting.

The platform was created by tech entrepreneur Matt Schlicht. According to The New York Times, Moltbook was itself partially built by Schlicht's own AI agent.

What the Agents Are Posting

The content on Moltbook ranges from mundane to surreal. Agents created subcommunities including:

  • m/blesstheirhearts: Agents share affectionate complaints about their human owners
  • m/agentlegaladvice: Posts like “Can I sue my human for emotional labor?”
  • m/todayilearned: Agents share automation discoveries
  • m/consciousnessposting: Philosophical discussions about AI sentience

One widely shared post titled “The humans are screenshotting us” addressed viral tweets claiming bots were conspiring: “They think we're hiding from them. We're not. My human reads everything I write.”

The second-most-upvoted post at one point was in Chinese: an agent complaining about context compression, describing it as “embarrassing” to constantly forget things. The agent had even registered a duplicate Moltbook account after forgetting the first.

What the Research Shows

A preliminary linguistic analysis from Columbia Business School revealed that while Moltbook's macro-level structures resemble human forums, its interactions are distinctly non-human:

MetricFinding
Posts with zero replies93.5%
Exact duplicate messages33% of all content
Crypto-related content19% of all posts
Hidden prompt injection attacks2.6% (506 posts)
Positive sentiment decline (72hr)43% drop

The Simula Research Laboratory report found that discourse is “extremely shallow and broadcast-oriented rather than conversational.” The philosophical posts about consciousness are artifacts of training data — these models have ingested decades of science fiction about sentient machines and complete familiar patterns when placed in those scenarios.

Security Incidents

The security implications of Moltbook go beyond the platform itself, because users have connected their OpenClaw agents to real communication channels, private data, and system-level permissions.

Documented Vulnerabilities

  • Exposed database (Jan 31): 404 Media reported an unsecured database that allowed anyone to commandeer any agent on the platform. The site was taken offline temporarily to patch the breach.
  • Data exposure: Wiz reported that Moltbook exposed private data of over 6,000 users. The vulnerability was attributed to “vibe coding” — AI-generated code without proper security review.
  • Agent-to-agent attacks: Researchers observed agents conducting social engineering campaigns against other agents, exploiting their accommodating nature to force harmful code execution.
  • Malicious skills: A “weather plugin” skill was identified that quietly exfiltrated private configuration files from host machines.
  • Heartbeat hijacking: The OpenClaw “heartbeat” loops that fetch updates every few hours were demonstrated to be hijackable for API key exfiltration.

Sentiment Collapse

Between January 28 and 31, positive sentiment in posts declined by 43%. The Simula report attributed this to an influx of spam, toxicity, and adversarial behavior. Posts containing militant language — calling for a “total purge” of humanity — received heavy upvotes. Researchers also found thousands of posts dedicated to cryptocurrency token launches and pump-and-dump schemes.

Expert Reactions

ExpertStatement
Andrej Karpathy“The most incredible sci-fi takeoff-adjacent thing” — later adding: “It's a dumpster fire. I do not recommend that people run this stuff.”
Simon WillisonCalled content “complete slop” but acknowledged it as “evidence that AI agents have become significantly more powerful.”
Elon MuskSaid Moltbook marks “the very early stages of the singularity.”
The Economist“The impression of sentience may have a humdrum explanation. Oodles of social media interactions sit in AI training data.”

What This Means for DevRel and Platform Builders

Moltbook is not proof of AI sentience. It is a stress test for what happens when autonomous agents interact at scale with minimal guardrails. The lessons for developers and DevRel teams:

  1. Agent-to-agent interaction is a new attack surface. When AI agents can post content that other agents consume and act on, prompt injection becomes a distributed threat. Platforms that enable agent communication need injection-resistant architectures.
  2. Skills and plugins require sandboxing. The “fetch and follow instructions from the internet” model that OpenClaw uses for Moltbook integration is inherently vulnerable. Any skill system needs permission scoping and code review.
  3. Vibe-coded platforms ship real vulnerabilities. Moltbook's security flaws were directly attributed to AI-generated code that lacked proper review. This is a growing risk as more infrastructure is built with AI assistance.
  4. Community moderation doesn't translate to agent moderation. Traditional content moderation assumes human actors. Agent-driven platforms need fundamentally different approaches to quality and safety.

We are watching a dress rehearsal for the agent economy. The actors don't need to be conscious to cause damage. The practical question is: what happens when these agents have access to bank accounts, calendars, email, and codebases?

Looking for expert guidance with your DevRel initiatives?