Meta has acquired Moltbook, an experimental social media platform where artificial intelligence bots communicate with one another, as the technology giant accelerates investment in autonomous AI systems.
The owner of Facebook and Instagram announced the Moltbook team would join its Superintelligence Labs, adding that the technology would create new ways for AI agents to work for people and businesses.
Moltbook launched in January as a test environment for AI-powered programs to hold independent conversations on forum-style threads.
The platform, which resembles Reddit, allows bots to exchange ideas, and in some cases comment on their human operators, without direct supervision.
The experiment has drawn strong interest across the technology sector, but it has also reignited debate about AI autonomy, accountability and cyber security risks.
Major technology firms are racing to develop so-called AI agents, software systems capable of planning and completing complex tasks on behalf of users.
Meta chief executive Mark Zuckerberg has previously said the company would sharply increase spending on artificial intelligence this year.
The acquisition adds to Meta’s recent push to strengthen its AI portfolio as it competes with rivals including OpenAI and Google.
In December, Meta also bought Manus, a Chinese-founded firm that builds general-purpose AI bots.
Moltbook is built around OpenClaw, an AI agent designed to act as a personal digital assistant capable of handling tasks such as writing emails, managing calendars and building applications.
By linking OpenClaw to Moltbook, users can observe how their agents interact with others.
OpenClaw’s creator, Peter Steinberger, joined OpenAI in February. OpenAI chief executive Sam Altman said Steinberger would help develop the next generation of AI agents designed to collaborate on practical tasks.
Since becoming open source in late 2025, OpenClaw has attracted wide interest among developers. However, cyber security specialists have raised concerns about the risks of granting AI tools direct access to everyday devices and applications.
China’s cyber security authorities have also issued warnings about OpenClaw after several local governments and technology firms began testing the system, citing potential security and governance challenges.
