OpenClaw is a free, open-source AI personal assistant that can manage emails, browse the web, and automate tasks across WhatsApp, Telegram, and Slack. It runs on your own computer, which sounds great—until you learn that security researchers found hundreds of exposed servers leaking credentials and conversation histories. One in five enterprises already have it installed. Before you join them, here's what the demo won't tell you. See 'The Security Problem Nobody Mentions' below.

Hundreds of Clawdbot servers got exposed because of a basic reverse proxy misconfiguration — authentication that worked fine locally became completely bypassable once the tool hit the internet. Sean breaks this down well, and it's a good reality check if you're considering deploying OpenClaw. I've pulled the key takeaways below.

Clawdbot Hype Needs A Reality Check

The Tool That Promises to Do Everything

I've been watching this tool spread through companies for months now. It started showing up in IT security reports, then in support tickets, and finally in conversations with business owners who had no idea their employees had installed it.

OpenClaw—which has gone through more name changes than a witness protection participant (Clawdbot, then Moltbot, now OpenClaw)—promises something genuinely appealing. A free AI assistant that lives on your computer and actually does things. Not just answers questions. Does things. Sends emails. Books appointments. Runs scripts. Manages your calendar.

The pitch is compelling: Why pay $20/month for ChatGPT when you can run your own autonomous AI agent for free? Why trust your data to a cloud company when you can keep everything local?

Hold that thought about 'keeping everything local.' The answer isn't what you'd expect, and it's the reason I'm writing this instead of recommending you install it today.

What OpenClaw Actually Does

Let me explain what this tool is before we get into why it's causing security teams to lose sleep.

OpenClaw is what's called an open source AI agent. Unlike ChatGPT or Claude, which you access through a website, OpenClaw runs on your own machine—your laptop, your server, your home computer. You install it, connect it to your messaging apps, and then you can text it commands like you'd text an employee.

Send it a WhatsApp message: 'Check my email for anything from Johnson Supply and summarize it.' It does.

Text it on Telegram: 'Draft a follow-up email to everyone who attended Tuesday's meeting.' Done.

Message it on Slack: 'Find flights to Denver next Thursday under $400.' It searches, compares, and reports back.

The architecture has two pieces. The Gateway handles the actual work—routing messages, calling AI models, managing your credentials, executing tasks. The Control panel is a web interface where you configure everything, connect your accounts, and manage API keys.

It's essentially an AI personal assistant you control completely. No monthly fees to OpenAI (beyond the API usage). No data leaving your network (in theory). No corporate policy violations (also in theory).

Over 60,000 people have starred it on GitHub. It's one of the fastest-growing AI tools in the open source world.

And it's a security nightmare.

The Security Problem Nobody Mentions in the Demo

Flick the lightbulb mascot examines a cracked server with a magnifying glass, eyebrows raised in alarm at exposed vulnerab...
Before you install, know what you're opening up—because not all cracks are meant to be there.

Here's where we get to the part I promised—why 'running everything locally' isn't the protection it sounds like.

Security researchers discovered severe misconfigurations affecting hundreds of publicly exposed OpenClaw control servers. Not theoretical vulnerabilities. Actual exposed servers. Leaking credentials. Conversation histories visible to anyone who knew where to look. Complete system control available to attackers.

The vulnerability stems from authentication bypass conditions created when OpenClaw operates behind misconfigured reverse proxies. In plain English: the tool assumes it's only accessible from your own computer, but when people expose it to the internet (which many do, to access it from their phone), that assumption becomes a wide-open door.

The Authentication Flaw

OpenClaw was designed for localhost trust—the assumption that if someone can reach the control panel, they must be sitting at the computer. That's fine when it's truly local. But the whole appeal of the tool is accessing it from your phone via WhatsApp or Telegram.

To make that work, people put it behind a reverse proxy (software that routes internet traffic to local applications). When that proxy is misconfigured—which is easy to do if you're not a server administrator—the localhost trust assumption becomes internet-wide exposure.

Suddenly, your 'local' AI assistant is accessible to anyone who scans for it.

The Plaintext Credentials Problem

By design, OpenClaw stores credentials locally in plaintext. That means your email password, your calendar access, your messaging platform tokens—all sitting in readable text files on whatever computer runs the agent.

For a tool that requires broad access to your most sensitive systems—email, calendar, documents, messaging platforms—this is a significant risk. If anyone gains access to that machine, they get everything the agent can access.

The Runaway Cost Problem

Security isn't just about hackers. It's about control.

Users have reported spending thousands of dollars overnight when their agent got stuck in loops. One documented case: an agent pinging OpenAI every 20 seconds asking for weather information. Another: an agent that kept retrying a failed task hundreds of times, burning through API credits.

When you give an autonomous AI agent access to paid services—AI APIs, email accounts, booking systems—and it goes rogue, your credit card goes with it.

Why One in Five Companies Already Have This Installed

Here's the uncomfortable reality for business owners: one in five enterprises already have OpenClaw installed somewhere in their organization. Not approved by IT. Not secured by your security team. Installed by an employee who wanted a better AI assistant than what corporate provides.

This is shadow AI in its purest form. The same phenomenon that had half your team using AI tools you don't know about, except this time the tool has access to email credentials, messaging accounts, and the ability to execute code.

The tool got a 12/100 'Scam' trust score from some security evaluation tools. Not because it's actually a scam—it does what it promises—but because the security posture is so poor that automated tools flag it as dangerous.

One security researcher put it bluntly: it's currently a 'Devs-only' playground. Don't let it near your private data unless you know how to harden a server.

The Part That Makes This Different From Other AI Tools

I promised to explain why the security risk here is different from, say, using ChatGPT carelessly. Here it is.

When you use ChatGPT badly, you might leak confidential information to OpenAI's servers. That's a data privacy problem. Serious, but contained.

When OpenClaw is misconfigured, attackers get your credentials—the keys to take actions as you. They can send emails from your account. Access your files. Execute commands on your machine. The exposure isn't just data. It's capability.

The technical innovation of OpenClaw isn't some breakthrough AI model. It's making agent functionality accessible via messaging apps through a gateway server. The researchers who reviewed it said it plainly: at the end of the day, this is an agent loop you access from your phone. Not shocking new technology. Just a clever interface.

That clever interface, though, is connected to real systems with real consequences.

How to Evaluate If OpenClaw Is Right for Your Business

I'm not saying don't use it. I'm saying know what you're getting into.

OpenClaw's own documentation recommends using Anthropic's Claude models for better prompt-injection resistance—recognition that the tool connects to real messaging surfaces where inbound messages should be treated as untrusted input. If someone sends your agent a malicious message, the agent might follow harmful instructions.

The governance-containment gap is the #1 enterprise AI security risk heading into 2026. Here's what that means in numbers: 58-59% of organizations report having monitoring and human oversight for AI tools. But only 37-40% have true containment controls—purpose binding (the AI can only do specific things) and kill-switch capability (you can stop it immediately).

OpenClaw, by default, has neither.

  • **If you're a developer** who understands server hardening, reverse proxy configuration, and can implement proper authentication—OpenClaw might be a useful experimentation platform.
  • **If you're a business owner** without dedicated IT security staff—this tool poses risks that likely outweigh the benefits of free AI assistance.
  • **If you're an employee** considering installing this on a work computer—please talk to IT first. You may be exposing your company to significant liability.

The Hidden Costs of 'Free' Open Source AI Agents

Flick the lightbulb mascot races toward a red-zone meter with alarmed expression, hands outstretched, as API tokens drain ...
When the meter's in the red and tokens are draining fast, even speed won't save you from a runaway API bill.

Open source doesn't mean free. It means you pay in different currencies.

The real costs of 'free' AI agents: API fees (you still pay OpenAI or Anthropic per request), time spent configuring and securing, risk exposure from misconfigurations, and potential cleanup costs if something goes wrong. Factor in 10-20 hours of setup time for someone who knows what they're doing.

You still pay for AI API calls—every time OpenClaw talks to Claude or GPT-4, that's your API key and your credit card. Heavy users report $50-200/month in API costs, which starts approaching what you'd pay for a commercial AI assistant with actual security controls.

You pay in configuration time. Setting this up securely isn't a weekend project for a non-technical user. It requires understanding Docker, reverse proxies, authentication, and server hardening.

You pay in risk. Every hour this tool runs misconfigured is an hour your credentials are potentially exposed.

Generalized AI agents still require proper skill definitions and subject matter expertise to work effectively. You have to define what tasks it can do, build the tools it uses, create the memory structures it needs. Poor skill definitions result in wasted money and ineffective task execution regardless of how good the underlying system is.

What to Do This Week If You're Considering OpenClaw

Here's the practical path forward, depending on your situation:

  1. **If you already have OpenClaw installed:** Check if it's internet-accessible. If you can reach the control panel from outside your network, it's likely misconfigured. Disable internet access immediately.
  2. **If you're a business owner:** Ask IT to scan for OpenClaw/Moltbot/Clawdbot installations. Given that one in five enterprises have it installed, assume someone in your organization might have it.
  3. **If you're evaluating AI assistants:** Compare the total cost of ownership. A $20/month commercial tool with proper security might be cheaper than a 'free' tool that costs you 20 hours of setup plus security risk.
  4. **If you still want to proceed:** Run OpenClaw in a sandbox environment—not on your actual machine. Never give it credentials you can't afford to lose. Set up spending alerts on any API keys it uses (cap at $50/day to start).
  5. **If you need the functionality but not the risk:** Look at commercial alternatives like Anthropic's Claude with MCP (Model Context Protocol) tools, which offer similar automation with enterprise security controls.
The baseline security test: If you wouldn't give this level of access to a contractor you just met, don't give it to an AI agent running on exposed infrastructure. Treat OpenClaw like a powerful but unvetted employee—capable, but requiring supervision.

What This Means for Your AI Security Strategy

  • **Shadow AI is now shadow infrastructure.** Tools like OpenClaw aren't just apps—they're servers running on your network with broad credential access. Your AI policy needs to address this explicitly.
  • **'Local' doesn't mean 'secure.'** The assumption that self-hosted tools are safer than cloud tools is often backwards. Cloud providers have security teams. Your marketing manager installing OpenClaw on their laptop does not.
  • **The governance gap is widening.** 58% of companies have AI monitoring. Only 37% have containment. That gap is where tools like OpenClaw operate undetected.
  • **Free tools have hidden costs.** API fees, configuration time, and security risk often exceed the cost of commercial alternatives with proper security controls.
  • **AI agent security will become a board-level concern.** When an autonomous AI agent can send emails as your CEO, the risk profile changes dramatically.

The broader lesson here applies beyond OpenClaw. As AI agents become ready for real work, the security considerations become more serious. An AI that can only answer questions has limited blast radius. An AI that can take actions has unlimited blast radius.


Frequently Asked Questions About OpenClaw

Flick the lightbulb mascot contemplates two paths at a fork: one cracked with caution symbols, one smooth with safety nodes.
Before you race down the automation highway, make sure you're choosing the path that protects your data—not just the one that gets you there fastest.

Is OpenClaw safe to use for my business?

Not without significant security expertise. OpenClaw stores credentials in plaintext and has documented authentication vulnerabilities. Security researchers found hundreds of exposed servers leaking sensitive data. Unless you have IT staff who can properly harden the installation, commercial AI assistants with enterprise security controls are a safer choice.

What's the difference between OpenClaw, Clawdbot, and Moltbot?

They're all the same tool under different names. It started as Clawdbot, was renamed to Moltbot, and is now OpenClaw. The configuration files still use the old ~/.clawdbot/ path for backward compatibility. All three names refer to the same open source AI agent project.

How much does OpenClaw actually cost to run?

The software is free, but you pay for AI API calls. Heavy users report $50-200/month in API costs to OpenAI or Anthropic. Add 10-20 hours of setup time for someone with server administration experience. If something goes wrong, cleanup and remediation can cost significantly more.

Can OpenClaw really replace a human assistant?

Partially. It can handle routine tasks like email summarization, scheduling, and web research. But you need subject matter expertise to define what it does well. Poor skill definitions result in wasted money and bad outputs. It's a power tool, not a replacement for judgment.

What should I do if I find OpenClaw installed at my company?

First, check if it's internet-accessible. If the control panel is reachable from outside your network, disable internet access immediately. Then audit what credentials it has access to and rotate any exposed passwords or API keys. Finally, establish a clear policy about AI tool installation.

Sources

Share this post