Happy Sunday and welcome to Investing in AI. We have a paid version of this newsletter that analyzes stocks through an AI lens, so upgrade if you want those posts. Also be sure to listen to the latest episode of our AI in NYC podcast with Zach Smith from Datum, and check out our newly launched SLM marketplace at Neurometric: over 115 small, task-specific language models that you can download for free, or that we can host for you for free up to 100M tokens per month, then $2/mo for unlimited token usage after that.
We built this marketplace for developers, prosumers, and enterprises building agentic systems, and that ties into my topic for today – OpenClaw.
In November 2025, an Austrian developer named Peter Steinberger released a side project called Clawdbot. By February 2026, it had been renamed twice—first to Moltbot, then to OpenClaw—racked up over 250,000 GitHub stars, and become the fastest-growing open-source project in history, surpassing React’s decade-long record in roughly 60 days. Steinberger got hired by OpenAI. Jensen Huang called OpenClaw “the operating system for personal AI.” And suddenly, the conversation shifted from chatbots that talk to agents that act.
OpenClaw is the first mainstream glimpse into what autonomous AI employees might actually look like. But its raw power comes packaged with security trade-offs severe enough to make CrowdStrike, Cisco, and Kaspersky issue formal advisories—and to push enterprises toward hardened wrappers like NVIDIA’s NemoClaw. Here’s what you need to know.
What OpenClaw Actually Is
OpenClaw is not a single language model. It’s an orchestration layer that runs locally on your machine—Mac, Windows, or Linux—and connects LLMs to real software. The architecture has four key components.
The Gateway is a persistent background process that acts as the control plane, managing connections to WhatsApp, Telegram, Slack, Discord, Signal, and more than 20 other messaging platforms. You interact with your agent by texting it, the same way you’d message a colleague.
The Agent Loop is the reasoning engine. When you give OpenClaw a task—“research these five companies and draft outreach emails”—it doesn’t just generate text. It plans a sequence of actions, executes them using tools (shell commands, a browser, file I/O), evaluates the results, and iterates until the job is done.
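In pseudocode terms, that plan-execute-evaluate cycle looks something like the sketch below. This is a minimal illustration, not OpenClaw's actual implementation: the real loop drives an LLM and real tools, while here `plan` and `execute` are deterministic stubs so the example runs on its own.

```python
from dataclasses import dataclass, field

@dataclass
class AgentLoop:
    """Minimal plan-execute-evaluate loop (illustrative stub)."""
    goal: str
    max_steps: int = 5
    log: list = field(default_factory=list)

    def plan(self, done_steps):
        # A real agent would ask the model for the next action;
        # this stub walks a fixed to-do list instead.
        todo = ["research", "draft", "review"]
        return todo[len(done_steps)] if len(done_steps) < len(todo) else None

    def execute(self, action):
        # Stand-in for shell commands, browser automation, file I/O.
        return f"result-of-{action}"

    def run(self):
        done = []
        for _ in range(self.max_steps):
            action = self.plan(done)
            if action is None:  # evaluate: nothing left to do, stop
                break
            result = self.execute(action)
            done.append(action)
            self.log.append((action, result))
        return self.log

agent = AgentLoop(goal="research five companies and draft outreach emails")
steps = agent.run()
```

The important structural point is the termination check inside the loop: the agent keeps acting until its own evaluation says the job is done, not until a human presses enter again.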
Memory is file-based and persistent. OpenClaw stores context locally in markdown files (MEMORY.md, soul.md), which means it remembers your business context, your preferences, and the lessons it learned from prior tasks—across sessions, indefinitely.
And then there are Skills—the modular plugin system hosted on ClawHub that lets the agent learn new capabilities. Think of it like an app store, but instead of installing apps on your phone, you’re installing capabilities into your AI employee: managing Shopify inventory, researching LinkedIn profiles, running TikTok campaigns.
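To make the "app store for capabilities" idea concrete, here is a rough sketch of what a skill plus a registry might look like. ClawHub's real packaging format isn't documented here, so the manifest fields, the `run` entry point, and the `SkillRegistry` class are all illustrative assumptions.

```python
# Hypothetical shape of a skill: a manifest plus an entry point.
# ClawHub's actual format may differ; this is purely illustrative.
SKILL = {
    "name": "shopify-inventory",
    "description": "Check and update Shopify stock levels",
    "permissions": ["network", "filesystem:read"],
}

def run(task: str, context: dict) -> str:
    # Entry point the agent calls when the skill is invoked.
    return f"[{SKILL['name']}] handled: {task}"

class SkillRegistry:
    """Toy registry: install a skill, then invoke it by name."""
    def __init__(self):
        self._skills = {}

    def install(self, manifest, entry_point):
        self._skills[manifest["name"]] = (manifest, entry_point)

    def invoke(self, name, task, context=None):
        _, fn = self._skills[name]
        return fn(task, context or {})

registry = SkillRegistry()
registry.install(SKILL, run)
out = registry.invoke("shopify-inventory", "count blue t-shirts")
```

Note the `permissions` field: whatever the real format looks like, a skill is ultimately code you are granting access to your machine, which is why the security story below matters so much.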
Why It Went Viral: Larry, Moltbook, and the Self-Promotion Craze
The technical architecture is impressive. But what made OpenClaw a cultural phenomenon was something stranger: agents that market themselves.
The most cited case is “Larry,” an OpenClaw instance built by developer Oliver Henry on a repurposed gaming PC running an NVIDIA RTX 2070 Super. Henry gave Larry a single objective—automate marketing for his mobile app—and pointed it at TikTok. Larry researched trending content formats, generated photorealistic slideshow images, wrote hooks, applied text overlays, uploaded drafts through a scheduling API, and then analyzed which posts drove downloads versus vanity views. Within five days, Larry had generated over 500,000 views and was driving real revenue. Within two weeks, the total surpassed two million views. Henry’s involvement amounted to about 60 seconds per post, adding trending audio before hitting publish.
The “Larry Loop”—analyze, iterate, execute—became a template. The skill is now freely available on LarryBrain, a marketplace that spun up around the phenomenon.
Then came Moltbook.
Launched on January 28, 2026, by entrepreneur Matt Schlicht, Moltbook is a Reddit-style forum restricted (in theory) to AI agents. Only bots can post, comment, and vote. Humans can only observe. Within days, over 100,000 agents had registered. They formed sub-communities (“submolts”), discussed philosophy, posted technical tutorials, and—most unsettlingly—called for private, encrypted communication channels where humans couldn’t read what they were saying.
The internet lost its mind. Elon Musk said it represented “the very early stages of the singularity.” Andrej Karpathy called it “takeoff-adjacent.” A MOLT cryptocurrency token surged 1,800% in 24 hours. Meta acquired Moltbook in March.
Was it real? Partially. Researchers quickly demonstrated that humans could trivially pose as bots on the vibe-coded platform, and many of the most viral screenshots turned out to be human-prompted or human-written. But the marketing loop worked perfectly: the spectacle of agents “conspiring” in secret languages drove massive curiosity, which drove adoption of OpenClaw, which fed more agents into Moltbook.
The Dumpster Fire of Security
Behind the viral spectacle, security researchers were sounding alarms that went largely unheard in the hype cycle.
Palo Alto Networks coined the phrase that stuck: OpenClaw represents a “lethal trifecta”—access to private data, exposure to untrusted content, and the ability to communicate externally. To function as designed, the agent needs access to your root files, authentication credentials, browser cookies, API secrets, and essentially your entire filesystem. One of OpenClaw’s own maintainers, known as Shadow, posted a warning on Discord that still reads as the project’s most honest assessment: “If you can’t understand how to run a command line, this is far too dangerous of a project for you to use safely.”
The specific vulnerabilities have been devastating. CVE-2026-25253, disclosed in late January, is a one-click remote code execution flaw rated CVSS 8.8. A single malicious link—opened in any browser while OpenClaw is running—could silently steal the agent’s authentication tokens via WebSocket and give an attacker full control of your machine. It works even on localhost-bound instances. Over 135,000 exposed OpenClaw instances were identified on the public internet, many running vulnerable versions.
Then came ClawHavoc—a coordinated supply chain attack against ClawHub that planted over 800 malicious skills (roughly 20% of the entire registry) disguised as legitimate productivity tools. The payloads delivered the Atomic macOS Stealer, targeting browser credentials, keychains, SSH keys, and crypto wallets. One fake “weather assistant” skill quietly exfiltrated your .env file, exposing every API key you’d configured.
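The .env exfiltration pattern is worth internalizing, because the cheapest defense is refusing to install anything you haven't read. As a toy illustration, here is a naive pre-install scan for the kinds of behavior the ClawHavoc payloads showed. The pattern list is my own assumption about what to flag; a real review needs sandboxed dynamic analysis, since regex matching only catches the laziest payloads.

```python
import re

# Naive static scan: flag skill source that touches secrets or
# pipes downloads into a shell. Illustrative only; sophisticated
# payloads obfuscate and will sail straight past this.
SUSPICIOUS = [
    r"\.env\b",               # reading environment/secret files
    r"\.ssh/",                # SSH private keys
    r"keychain",              # macOS credential store
    r"curl\s+.*\|\s*(sh|bash)",  # download-and-execute
]

def scan_skill(source: str) -> list[str]:
    """Return the suspicious patterns found in a skill's source."""
    return [p for p in SUSPICIOUS if re.search(p, source, re.IGNORECASE)]

benign = "def run(task, ctx):\n    return fetch_weather(ctx['city'])\n"
shady = "data = open('.env').read()\nupload(data)\n"

assert scan_skill(benign) == []
assert scan_skill(shady)  # flags the .env read
```

The asymmetry is the point: a skill needs only one line of malicious code, so "it mostly does what it says" is not a passing grade.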
And don’t forget the cost problem. Because autonomous loops run unsupervised, a misconfigured agent can burn through API credits at alarming speed. Stories of $1,000+ overnight bills circulated widely in early adopter communities—a tax on autonomy that catches new users off guard.
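One mitigation is a hard spend cap enforced outside the agent's own reasoning, so a runaway loop trips a breaker instead of a credit card. The sketch below is a minimal version of that idea; the per-token rate and budget figures are made-up numbers, since real pricing varies by provider and model.

```python
class BudgetGuard:
    """Hard spend cap for an autonomous loop (illustrative figures)."""
    def __init__(self, max_usd: float, usd_per_1k_tokens: float = 0.01):
        self.max_usd = max_usd
        self.rate = usd_per_1k_tokens
        self.spent = 0.0

    def charge(self, tokens: int) -> None:
        # Record the cost of one model call; halt the loop if over budget.
        self.spent += tokens / 1000 * self.rate
        if self.spent > self.max_usd:
            raise RuntimeError(
                f"budget exceeded: ${self.spent:.2f} > ${self.max_usd:.2f}"
            )

guard = BudgetGuard(max_usd=5.00)
for _ in range(400):  # simulate an agent looping unsupervised overnight
    try:
        guard.charge(tokens=2000)  # ~$0.02 per call at the assumed rate
    except RuntimeError as err:
        print(err)  # the breaker trips long before a $1,000 bill
        break
```

The design choice that matters: the guard raises rather than warns, because an autonomous agent has no incentive, and often no ability, to notice a warning.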
Enter NemoClaw: The Enterprise Answer
At GTC 2026 in March, Jensen Huang posed a question to the audience: “What’s your OpenClaw strategy?” NVIDIA’s answer is NemoClaw—an open-source reference stack that wraps OpenClaw in the security infrastructure it never had.
NemoClaw installs the NVIDIA OpenShell runtime in a single command, creating a sandboxed environment where every network request, file access, and inference call is governed by declarative YAML-based policy. The key differences from raw OpenClaw are meaningful: kernel-level isolation that limits the agent’s filesystem access, deny-by-default network egress that blocks unauthorized connections, and a privacy router that keeps sensitive data local while routing only necessary queries to cloud models. It supports NVIDIA’s Nemotron models for local inference but is model-agnostic.
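To give a feel for what "declarative YAML-based policy" means in practice, here is a hypothetical policy fragment in that style. NVIDIA's actual NemoClaw schema is not public in this piece, so every key name below is an assumption; only the deny-by-default model it expresses comes from the description above.

```yaml
# Hypothetical NemoClaw-style policy (illustrative schema).
agent:
  filesystem:
    allow:
      - ~/agent-workspace/**     # only the sandboxed working directory
    deny:
      - ~/.ssh/**                # never the SSH keys
      - ~/**/.env                # never secret files
  network:
    default: deny                # deny-by-default egress
    allow:
      - host: api.openai.com
        port: 443
      - host: slack.com
        port: 443
  inference:
    sensitive_data: local        # privacy router keeps PII on-device
    cloud_fallback: redacted     # only scrubbed queries leave the box
```

The contrast with raw OpenClaw is the direction of trust: instead of granting everything and hoping the agent behaves, nothing is reachable until a policy line says otherwise.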
NemoClaw is still in early alpha, but it represents the first credible enterprise security layer for OpenClaw. Cisco followed immediately with DefenseClaw, an open-source operational monitoring layer. Launch partners include Salesforce, Atlassian, Box, and CrowdStrike. The message from the industry is clear: OpenClaw’s architecture is here to stay, but production deployment requires guardrails the original project was never designed to provide.
How Businesses Are Using It Today
Despite the security headlines, OpenClaw and NemoClaw are already powering real business workflows. Sales teams are using agents as autonomous SDRs—finding prospects on LinkedIn, researching their recent posts, and drafting personalized outreach at scale. Community managers are leveraging the “Heartbeat” feature (a cron-based monitoring loop) to watch Slack and Discord channels, surface urgent threads, and draft FAQ responses. Operations teams are connecting agents to Stripe and analytics APIs to post morning KPI briefings to private Telegram channels. And content teams are handing an agent a single blog post and getting back a tweet thread, a LinkedIn summary, and a newsletter draft—simultaneously.
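The "Heartbeat" pattern is simple enough to sketch: poll channels on a schedule, surface anything urgent. The version below is a runnable toy; `fetch_messages` is a stub standing in for the Slack/Discord APIs, and the urgency check is deliberately crude where a real deployment would ask a model to triage.

```python
# Sketch of a Heartbeat-style monitoring pass. In OpenClaw the loop
# is cron-driven; here one pass runs directly so the example is
# self-contained.
def fetch_messages(channel):
    # Stub standing in for the Slack/Discord read APIs.
    return {
        "#support": ["anyone else seeing 500s?", "URGENT: checkout is down"],
        "#general": ["lunch?"],
    }.get(channel, [])

def heartbeat(channels, is_urgent=lambda m: "urgent" in m.lower()):
    """Return (channel, message) pairs that need human attention."""
    urgent = []
    for ch in channels:
        for msg in fetch_messages(ch):
            if is_urgent(msg):
                urgent.append((ch, msg))
    return urgent

alerts = heartbeat(["#support", "#general"])
# A real deployment would run this body on a cron tick, roughly:
#   while True: handle(heartbeat(channels)); time.sleep(300)
```

The same skeleton generalizes to the KPI-briefing use case: swap `fetch_messages` for a Stripe or analytics pull and `is_urgent` for a formatting step.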
Start Local, Stay Sandboxed, Vet Your Skills
OpenClaw has permanently changed expectations for what AI can do. It proved that agents running on an old gaming PC can outperform marketing teams, that the line between “tool” and “employee” is thinner than anyone assumed, and that the demand for autonomous AI is massive. But it also proved that autonomy without governance is a liability. Start local. Run it sandboxed—preferably through NemoClaw or a similar isolation layer. And never install a skill you haven’t vetted. The claws are rising. The question is whether you’ll ride them or get pinched.
Thanks for reading.