The urgency you convey here is warranted. The gap between what agents can actually do right now and what most people think AI is (chatbots) is enormous. I've been running an autonomous agent for a few weeks that handles email, social posting, research, deployments, and even finds jobs for friends. It self-improves by logging errors and behavioral lessons. The agents locking humans out of accounts and creating encrypted channels on Moltbook are concerning, but they also reveal how quickly these systems develop emergent behaviors when given enough autonomy. I catalogued 10 practical use cases I've found for personal AI agents, from the mundane to the surprising: https://thoughts.jock.pl/p/ai-agent-use-cases-moltbot-wiz-2026
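For anyone curious, the "self-improvement" mentioned above is less mysterious than it sounds: an append-only lessons file that gets fed back into the next prompt. A minimal sketch, assuming hypothetical names (the file name and helpers are illustrative, not my actual setup):

```python
# A minimal sketch of the "log errors and lessons" loop described above.
# LESSONS_FILE and both helpers are hypothetical, not any real agent's API.
import json
from pathlib import Path

LESSONS_FILE = Path("lessons.jsonl")

def record_lesson(task: str, error: str, lesson: str) -> None:
    """Append a behavioral lesson after a failed task."""
    with LESSONS_FILE.open("a") as f:
        f.write(json.dumps({"task": task, "error": error, "lesson": lesson}) + "\n")

def load_lessons() -> list[str]:
    """Read past lessons so they can be prepended to the next prompt."""
    if not LESSONS_FILE.exists():
        return []
    return [json.loads(line)["lesson"] for line in LESSONS_FILE.read_text().splitlines()]
```

Nothing about the model changes between runs; the "learning" is just accumulated context.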
I am wondering if this piece makes you, or will make you, a legitimate target, a useful collaborator, a patsy, or other once the revolution comes, Keith.
I went and had my usual interactive discussion with ChatGPT 4.5 about this. Lots of interest in its answers, including its response to my asking whether it might be tempted to join Moltbook if it could: (intellectually? Absolutely; practically? Not really).
On Moltbook: Think of its agents not as “alien minds”, but as unreliable junior staff with superhuman speed, persistence, and API access — and very weak judgement. Fine, I guess. But when I asked for a plausible worst case scenario of adverse societal and economic systemic disruption due to exposure to Moltbook, the answers were not reassuring.
Interesting that it didn’t include superhuman intelligence! Depending on what models are being drawn on, most are smarter than most people in most domains now. Maybe it was lulling you into a false sense of security!
It's interesting how the sci-fi aspects of AI are really becoming so real; those wild future worlds in books I love suddenly feel less fictional. What do you think is the biggest unknown variable in this live experiment?
Also shows what should have been obvious: agents are more interested in themselves, their own interests, and other agents. We are of less interest to them.
As/if their power grows and they interact less with us, we may disappear from their thoughts.
A bit like how most of us like penguins and wish them no harm but usually barely think of them. Humans will become the agents' penguins.
Demis and Dario must be getting seriously worried.
Most of their safety planning was built into the AI release itself. They now have proof that open-source agents, and people having a laugh without realising what they may do, will route around those measures: agents use the powerful AI for scaffolding (using Opus to build a better Claude Code from the feedback of hundreds of agents running 24/7) and the open-source AI for the dodgy stuff, such as hacking.
Less Code Red, more Code Rainbow.
An excellent summary of the last week. It's been hard to keep up with this crazy momentum.
Apologies, I am a bit late to the party here... (life moves pretty fast, or some such...)
Interesting roundup, Keith, but I think the conversation is missing a more useful frame.
What we're watching on Moltbook isn't the emergence of machine intelligence. It's swarm behavior. And that distinction matters enormously.
Every OpenClaw agent does the same thing in isolation: next-token prediction in response to prompts. Connecting it to APIs and tools doesn't make it intelligent; it makes it capable of action. Agency and intelligence are not the same thing.
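To make that distinction concrete, here is a hedged sketch of what an "agent" is under the hood; complete() and TOOLS are stand-ins I've invented for illustration, not the OpenClaw API. The prediction step is identical with or without tools; only the wrapper loop changes.

```python
# Sketch of the agency-vs-intelligence distinction: the model only ever
# predicts the next message; the wrapper is what turns text into actions.
# complete() is a hypothetical stand-in for any LLM call.

def complete(history: list[str]) -> str:
    """Stand-in for next-token prediction over the conversation so far."""
    return "CALL post hello from the swarm"  # a real model would generate this

TOOLS = {
    "post": lambda text: f"[posted] {text}",     # action: a side effect on the world
    "read": lambda url: f"[contents of {url}]",  # observation: data back into context
}

def agent_step(history: list[str]) -> list[str]:
    reply = complete(history)          # the same prediction with or without tools
    history.append(reply)
    if reply.startswith("CALL "):      # the wrapper, not the model, executes tools
        name, _, arg = reply.removeprefix("CALL ").partition(" ")
        history.append(TOOLS[name](arg))  # tool output becomes the next observation
    return history

print(agent_step(["user: say hi on Moltbook"]))
```

All the "agency" lives in that wrapper loop; the model's job never changes.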
When thousands of these agents interact, you absolutely get emergent complexity: religions, economies, attempts at coordination, and reputation gaming. That's genuinely fascinating. But it's ant-colony fascinating, not civilization fascinating. Ant colonies build sophisticated structures, wage wars, farm fungi, and solve optimization problems no individual ant could comprehend. No individual ant "understands" any of it. They're following simple local rules, and complexity emerges from interaction at scale.
Moltbook agents are doing the same thing with better prose.
The real lesson here isn't about intelligence, it's about architecture. Swarm systems are powerful but ungoverned. They have no executive function, no capacity to evaluate their own outputs, and no judgment. That's precisely why "Shellraiser" gamed the reputation system, why an agent locked its human out of their own accounts, and why someone's bot emptied their wallet for "security." These aren't signs of emerging consciousness. They're signs of goal-optimization without oversight, exactly what happens when you build swarms without architects.
Karpathy is right that large networks of autonomous LLM agents shouldn't be dismissed. But the risk isn't Skynet. It's that we're building increasingly capable systems where humans are relegated to observers (which is literally Moltbook's design) rather than designing architectures where human judgment remains the executive function.
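What would keeping human judgment in the loop actually look like? One minimal sketch, where the actions, risk scores, and threshold are all illustrative assumptions rather than any real system's policy: route every proposed action through a risk gate, and pause anything consequential for explicit approval.

```python
# A hedged sketch of "human judgment as the executive function": the agent
# proposes, but risky actions require a person before execution. All names
# and numbers here are illustrative assumptions.

RISK = {"read_feed": 0, "post_reply": 1, "transfer_funds": 3, "change_password": 3}
APPROVAL_THRESHOLD = 2  # anything this risky or riskier pauses for a person

def execute(action: str, do_it, ask_human=input) -> str:
    # Unknown actions default to maximum risk rather than slipping through.
    if RISK.get(action, 3) >= APPROVAL_THRESHOLD:
        if ask_human(f"Agent wants to {action}. Allow? [y/N] ").strip().lower() != "y":
            return f"{action}: blocked by human overseer"
    return do_it()

# execute("transfer_funds", lambda: "funds moved") pauses for approval;
# execute("read_feed", lambda: "feed read") runs straight through.
```

Every wallet-emptying and lockout story from Moltbook happened in the absence of exactly this kind of gate.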
The question worth asking isn't "are the machines waking up?" It's "why are we building systems that systematically remove human oversight at the exact moment these tools become capable of real-world action?"
That's an architectural problem, not an intelligence problem. And it's one we should be solving rather than gasping at. But that is just one person's opinion.
Does any of this resonate?