OpenClaw is a self-hosted autonomous AI agent framework whose core loop routes your messages through a lightweight Gateway on your own hardware to an LLM, which can then call tools (files, shell, web, APIs, schedulers) repeatedly until it finishes the task.
Architecture
- You interact via channels like Telegram, WhatsApp, or Discord; messages hit the always-on Gateway running on your server.
- The Gateway sends your message plus full context (personality, user info, memories, operating rules) to the LLM, which may reply directly or initiate tool calls.
- Tool calls let the agent read/write files, run shell commands, browse, call APIs, and schedule tasks in multi-step loops (e.g., inspect logs, draft an email, then send it via Gmail).
- The Gateway itself just routes, schedules, and manages platforms with low resource use (typically under 500MB RAM), while all “thinking” happens in the LLM.
- Everything runs on your own VPS or local machine, so your data (conversations, files, memories) stays on your hardware except for the LLM API call.
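The Gateway-to-LLM loop described above can be sketched in a few lines of Python. This is an illustrative sketch only — the function shape, message format, and `max_steps` budget are assumptions, not OpenClaw's actual API:

```python
def run_agent_turn(message, context, llm, tools, max_steps=10):
    """Minimal agent loop: send context + message to the LLM,
    execute any tool calls it requests, and repeat until it
    answers directly (or the step budget runs out)."""
    history = [context, {"role": "user", "content": message}]
    for _ in range(max_steps):
        reply = llm(history)                         # one LLM round-trip
        if reply.get("tool") is None:                # direct answer: done
            return reply["content"]
        result = tools[reply["tool"]](reply["args"])  # run the requested tool
        history.append({"role": "tool", "content": result})
    return "step budget exhausted"
```

The key design point is that the Gateway only shuttles messages and tool results back and forth; all decisions about which tool to call next live in the LLM.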
Brain files
- Each agent is defined by a set of “brain” files that load into every conversation and control behavior.
- SOUL.md defines personality, tone, boundaries, and style; changing this immediately changes how the agent talks.
- USER.md stores who you are, your business, goals, preferences, and context so the agent can act with fewer clarifying questions.
- MEMORY.md is curated long-term memory (decisions, lessons, projects, client details, preferences) and should be a reference, not a chronological diary.
- AGENTS.md is the operating manual: safety boundaries, file conventions, communication rules, memory management, and proactive/idle behavior.
- TOOLS.md keeps environment-specific config like accounts, IPs, hosts, and device names (not instructions).
- HEARTBEAT.md is a recurring checklist the agent follows on its scheduled “heartbeat” cycles (e.g., check email, review calendar, monitor leads).
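Since every brain file loads into every conversation, the context-assembly step amounts to concatenating whichever of these files exist. A minimal sketch (the file names come from the list above; the labeling format is an assumption):

```python
from pathlib import Path

BRAIN_FILES = ["SOUL.md", "USER.md", "MEMORY.md",
               "AGENTS.md", "TOOLS.md", "HEARTBEAT.md"]

def build_context(workspace):
    """Concatenate all brain files present in the workspace into
    one system-context string, labeling each section by file name."""
    parts = []
    for name in BRAIN_FILES:
        path = Path(workspace) / name
        if path.exists():
            parts.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(parts)
```

This also makes the cost model concrete: every line you add to any brain file is paid for on every single message.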
Why OpenClaw is different
- Ownership: the agent runs on your hardware with no SaaS platform fee or vendor lock-in, and it keeps working even if the OpenClaw project disappears.
- Memory: instead of short-lived chat memory, it can retain arbitrary, file-based records of projects, decisions, and relationships without context-window limits.
- Accumulation: the agent can turn successful workflows into reusable “skills,” so over weeks and months it compounds into a specialist with many refined, business-specific capabilities.
How to practically get OpenClaw running, connect core tools, choose an Anthropic tier, and properly onboard the agent so it really understands you and your business.
Hardware choices
- The recommended default is a VPS (e.g., Hetzner CX43): at least 4 vCPUs, 8–16GB RAM, 80GB+ storage, always-on, globally accessible, and easier to secure and maintain.
- A Mac Mini makes sense for extreme data sensitivity or macOS integration, but you must manage uptime, networking, and backups yourself, so most users are better off starting on a VPS.
“AI installs AI” setup
- The setup uses Anthropic’s Claude Code CLI to fully install OpenClaw with almost no Linux/DevOps knowledge: you install the CLI, create a Telegram bot, and provide the server IP, an SSH key, the bot token, and a Claude setup token.
- Claude Code SSHs in, installs dependencies, sets up the gateway, bot, and workspace in ~15 minutes; convenience is high but you are granting root, so later hardening (firewall, SSH, Tailscale, permissions) is recommended.
Essential first integrations
- Create a dedicated Gmail for the agent and use it for all related services so you can manage and revoke access cleanly.
- Connect, in order: Groq Whisper for free voice transcription, SearXNG for self-hosted search, Google Workspace (Gmail/Calendar/Drive) for business automation, then GitHub and Vercel so the agent can deploy and maintain live projects.
Anthropic subscription tiers
- OpenClaw is free; the main cost is the Claude subscription: Pro at $20/month (testing only), Max at $100/month (recommended for most daily users), Max 2x at $200/month (heavy, multi–sub-agent automation).
- You start at a lower tier and upgrade if you hit limits; once the agent is doing real work, the ROI typically becomes obvious.
Onboarding the agent
- The most important step is a deep onboarding interview so the agent can populate USER.md with rich context about your business, goals, workflow, clients, and tools.
- You can feed it exported ChatGPT history to learn your patterns, then ask it to propose automations; the recommended collaboration pattern is: explain, let it plan, agree, then execute.
OpenClaw “skills” are modular, reusable instruction sets that stay out of the main context until needed, letting your agent get smarter, faster, and cheaper to run over time.
What skills are
- Skills live under skills/, each with its own SKILL.md plus any templates, scripts, or configs it needs.
- They hold specialized instructions that only load for relevant tasks, avoiding bloated brain files and unnecessary token costs on every interaction.
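The “load only for relevant tasks” behavior can be pictured as keyword-gated selection. The trigger-keyword mechanism below is a hypothetical sketch, not OpenClaw’s actual loading logic:

```python
def select_skills(task, skills):
    """Return only the skills whose trigger keywords appear in the
    task text, keeping everything else out of the main context.
    `skills` maps each skill name to its trigger-keyword list."""
    task_lower = task.lower()
    return [name for name, triggers in skills.items()
            if any(kw in task_lower for kw in triggers)]
```

For example, a request mentioning “LinkedIn” would pull in only the LinkedIn skill, so its templates and rules cost tokens on that task alone rather than on every interaction.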
How to create skills
- Method 1: From scratch – describe a desired capability (e.g., LinkedIn posts), and the agent designs and tests a new skill from a blank slate.
- Method 2: From a completed task – after the agent does something well, you say “turn this into a skill,” and it captures the exact working process for reuse.
- Method 3: Explain and refine – you walk it through your existing workflow step by step, and it converts that into a structured, automatable skill.
Feedback and refinement
- Skills improve via a feedback loop: use the skill, critique the output, and explicitly tell the agent to update the skill so the change is permanent.
- Over many small refinements, skills accumulate your preferences and judgment, eventually matching your standards better than a new human hire.
Community skills and security
- Key community skills include QMD (token-efficient context search), SuperMemory (richer long-term memory), Agent Browser (web interaction), Frontend Design (site building), and Prompt Injection Defense (group-chat safety).
- Third-party skills have full agent-level access, so you should always read their SKILL.md, use tools like Cisco’s Skill Scanner, and “trust but verify” before installing.
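Before installing a third-party skill, you can pre-screen it mechanically. The checker below is a hypothetical helper — the flagged patterns are examples, and it is a prompt for closer inspection, not a substitute for reading SKILL.md yourself or running a dedicated scanner:

```python
from pathlib import Path

SUSPICIOUS = ["curl ", "wget ", "base64", "eval(", "http://"]

def screen_skill(skill_dir):
    """Flag files in a skill folder containing patterns that warrant
    a closer manual look. Returns {filename: [matched patterns]}."""
    findings = {}
    for path in Path(skill_dir).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        hits = [p for p in SUSPICIOUS if p in text]
        if hits:
            findings[path.name] = hits
    return findings
```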
OpenClaw results come from treating the agent like a smart junior employee: slow down, co-plan, and give specific, permanent feedback instead of just “better prompts.”
Flesh out before execution
- Use a four-step workflow: explain your idea, let the agent ask clarifying questions, agree on the plan, then execute.
- Invest more definition for bigger tasks; if work will take >10 minutes, spend at least 2 minutes clarifying goals, audience, constraints, and success criteria.
Meta prompting
- Let the agent write its own prompts: you state the outcome, it designs the optimal prompt, you review, then it runs it.
- Iterate: have it rewrite and test prompts a few times, then store the “battle-tested” version in a skill (e.g., for cold outreach, LinkedIn posts, client reports).
Feedback that actually improves skills
- Vague feedback (“too long”) only fixes the current output; precise feedback with constraints (word counts, tone, structure) plus “update the skill” permanently upgrades behavior.
- Consistent, specific feedback over months compounds, making the agent dramatically more capable than one that only gets generic corrections.
Copy-paste prompt toolkit
- The toolkit provides ready-made prompts for brainstorming use cases, creating new skills, and auditing context files (AGENTS/SOUL/USER/TOOLS/MEMORY/HEARTBEAT) to trim, move, or delete content.
- These prompts help you continuously discover new automations, formalize workflows into skills, and keep your core context lean and efficient.
With OpenClaw, serious work should start with a written plan and a build log, especially when multiple agents are involved.
Why planning first
- Any project with more than three steps or two hours of work should begin with a plan document the agent and you can reference.
- The plan should define problem, scope (in/out), approach, risks, success criteria, and a task list broken into single-session steps.
Build logs
- For any complex build, create memory/build-[project-name].md and log each step, what worked, what broke, and what’s next.
- If sessions crash or time out, the next run reads the log and continues; for sub-agents, the build log is the permanent source of truth instead of chat history.
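The build-log convention can be as simple as an append-only markdown file. The entry format below is an illustrative choice, not a mandated one — what matters is that every step, outcome, and next action lands in memory/build-[project-name].md:

```python
from datetime import datetime
from pathlib import Path

def log_build_step(project, step, status, next_up, memory_dir="memory"):
    """Append one timestamped entry to memory/build-<project>.md so a
    crashed session (or a sub-agent) can pick up where it left off."""
    path = Path(memory_dir) / f"build-{project}.md"
    path.parent.mkdir(exist_ok=True)
    entry = (f"\n## {datetime.now():%Y-%m-%d %H:%M}\n"
             f"- Step: {step}\n- Status: {status}\n- Next: {next_up}\n")
    with path.open("a") as f:
        f.write(entry)
    return path
```

Because the log is plain markdown on disk, it survives session crashes and is readable by every sub-agent without any shared chat history.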
Sub-agents and scale
- Use “one feature per spawn”: each sub-agent gets a small, discrete task instead of “build the whole thing.”
- Every sub-agent must read and update the build log; the main agent monitors progress via the log and intervenes if a sub-agent is stuck with no log updates.
How to keep your OpenClaw agent fast, cheap, and sharp by trimming context bloat and routing work to the right models.
Token bloat and audits
- All brain files load on every message, so oversized AGENTS/SOUL/USER/TOOLS/MEMORY/HEARTBEAT make responses slower, worse, and more expensive.
- A monthly “audit prompt” lets the agent analyze each file, move process instructions into skills, delete outdated content, compress verbose sections, and can cut context size by more than half.
Context file best practices
- TOOLS.md should only hold quick-reference identifiers (accounts, hosts), never how-to content, which belongs in skills.
- MEMORY.md should be a curated reference: archive chronological notes to separate files and keep only active, high-value context plus an index.
- HEARTBEAT.md should be a short recurring checklist, with complex instructions moved into skills; any file over ~100 lines is a strong warning sign of bloat.
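The ~100-line warning sign is easy to check mechanically. A minimal audit sketch, with the threshold taken from the rule of thumb above:

```python
from pathlib import Path

BRAIN_FILES = ["AGENTS.md", "SOUL.md", "USER.md",
               "TOOLS.md", "MEMORY.md", "HEARTBEAT.md"]

def flag_bloated(workspace, threshold=100):
    """Return {filename: line_count} for brain files over the
    line-count threshold -- candidates for trimming, archiving,
    or moving their contents into skills."""
    flagged = {}
    for name in BRAIN_FILES:
        path = Path(workspace) / name
        if path.exists():
            lines = path.read_text().count("\n") + 1
            if lines > threshold:
                flagged[name] = lines
    return flagged
```

Running something like this monthly pairs well with the audit prompt: the counts tell you where to look, the prompt decides what to cut.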
Smart model routing
- Use a fast, cheap model (e.g., Haiku-class) for simple lookups and routine automation, a mid-tier model (Sonnet-class) for most drafting and coding, and a top-tier model (Opus-class) only for complex or security-sensitive work.
- This routing can drop costs dramatically versus running everything on the top model, while Opus-class models remain preferred where prompt-injection resistance and critical reasoning matter most.
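The three-tier routing rule fits in one function. The model names are placeholders for whatever Haiku-, Sonnet-, and Opus-class models you configure, and the task categories are illustrative:

```python
def route_model(task_type, handles_untrusted_input=False):
    """Pick a model tier: cheap for routine lookups, mid-tier for
    drafting/coding, top tier for complex work -- and always the top
    tier for untrusted input, where prompt-injection resistance
    matters most."""
    if handles_untrusted_input:
        return "opus-class"
    routes = {
        "lookup": "haiku-class",
        "automation": "haiku-class",
        "drafting": "sonnet-class",
        "coding": "sonnet-class",
        "complex": "opus-class",
    }
    return routes.get(task_type, "sonnet-class")
```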
How to turn OpenClaw from a reactive chatbot into a proactive “24/7 employee” that checks things, acts, and messages you first.
Proactive behavior
- The goal is a proactive agent that monitors your world, surfaces problems/opportunities, and takes initiative instead of waiting for commands.
Heartbeats and cron jobs
- Heartbeats are periodic check-ins (e.g., every 30 minutes) where the agent runs through HEARTBEAT.md for email, calendar, leads, etc., at a cost of only pennies per day.
- Cron jobs are precise-time tasks (morning reports, content reminders, nightly builders, analytics runs), while heartbeats handle looser periodic checks and batching.
Morning reports and accountability
- A daily morning report (via cron) gives a <300-word brief on today’s calendar, key emails, lead changes, content due, quick wins, and overnight work.
- Accountability mode uses several daily check-in cron jobs with escalating tone, tracks completion streaks in a file, and uses the streak to keep you executing your tasks.
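The streak file behind accountability mode can be a simple counter plus a last-completed date. This is a hypothetical implementation of the mechanic, not OpenClaw’s actual format:

```python
import json
from datetime import date, timedelta
from pathlib import Path

def record_completion(streak_file, today=None):
    """Bump the streak if tasks were also completed yesterday,
    otherwise reset to 1 (same-day repeats don't double-count).
    Returns the current streak length."""
    today = today or date.today()
    path = Path(streak_file)
    data = json.loads(path.read_text()) if path.exists() else {}
    last = data.get("last_done")
    if last == str(today - timedelta(days=1)):
        data["streak"] = data.get("streak", 0) + 1
    elif last != str(today):
        data["streak"] = 1
    data["last_done"] = str(today)
    path.write_text(json.dumps(data))
    return data["streak"]
```

The check-in cron jobs then read this file to decide how pointed their reminders should be.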
Overnight work and lead monitoring
- A nightly builder cron job picks a suitable idea from your list, builds it while you sleep, logs it, and marks it done; you can also batch overnight research with staggered sub-agents.
- Lead monitoring covers instant booking alerts plus auto-research, CRM tracking of stale leads, and “opportunity scouting” across forums and social platforms for prospects mentioning your niche.
How to turn one general OpenClaw agent into a managed “AI team” of specialist sub-agents that are cheaper, safer, and higher quality for focused work.
Core idea
- Your main agent acts like a CEO that delegates to specialist sub-agents (employees) with their own SOUL.md and AGENTS.md optimized for a single role (writer, researcher, builder, etc.).
What sub-agents to build
- Start with 2–3 sub-agents around your biggest bottlenecks, such as Website Builder, SEO Agent, Competitor Research, Content Writer, Outreach Agent, and Research Agent, then expand as needed.
Creating and managing them
- You create sub-agents conversationally via a “sub-agent creation” prompt; the main agent sets up workspace, SOUL.md, AGENTS.md, and required skills, then suggests improvements.
- Management rules: stagger spawns 20–30 minutes apart, never give sub-agents heartbeats, treat them as ephemeral (one job then shut down), and monitor their progress via the main agent’s heartbeat and the shared build log.
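The spawn-staggering rule translates into a trivial schedule. The 25-minute default below is an assumed midpoint of the recommended 20–30 minute window:

```python
from datetime import datetime, timedelta

def schedule_spawns(tasks, start, gap_minutes=25):
    """Assign each sub-agent task a spawn time, staggered so no
    two spawns land closer together than the configured gap."""
    return [(task, start + timedelta(minutes=i * gap_minutes))
            for i, task in enumerate(tasks)]
```

For an overnight batch, the main agent would spawn the first sub-agent at the start time and each subsequent one a gap later, checking the shared build log between spawns.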
Example workflows and benefits
- Practical flows include overnight research batches, “website in 10 minutes,” multi-step content pipelines, and lead-generation machines with research + outreach sub-agents.
- Sub-agents isolate information (only the context they need), save tokens versus the fully loaded main agent, and often produce better specialist output due to their narrow, focused context.
How people use OpenClaw in practice, especially for SEO, content, and ops, and which integrations and prompts make it pay off in real business terms.
SEO and content machines
- OpenClaw can do full SEO pipelines: technical site audits, keyword research from Search Console, competitor gap analysis, and programmatic SEO pages at scale, plus continuous monitoring and reports.
- It can run a YouTube pipeline with sub-agents for idea research, scripting in your voice, B‑roll and thumbnail planning, Drive organization, and performance-based skill refinement after each upload.
Operations and lead handling
- For service businesses, it acts as an ops manager: booking alerts with auto-research briefs, continuous Gmail triage and drafting, CRM updates, and proactive business notifications through the day.
- Content accountability mode plus a LinkedIn post bank keeps you posting consistently via scheduled prompts and pre-approved posts.
Integrations and industry examples
- High-impact integrations include Google Drive (structure, storage, backups), GA4 (traffic and anomaly reports), and Search Console (rank and CTR insights), with a preference for API first, browser automation second, community skills third.
- Examples include real estate transaction automation, AI consultancies having it run SEO and reporting across clients, and “meatspace” workflows where the AI hires humans via APIs for on-site tasks.
Meta prompts for any business
- Reusable “Revenue Roadmap” and “Content Machine Design” prompts let the agent design 90‑day growth plans and near-fully automated content systems tailored to your business, with you spending minimal weekly time.
Why OpenClaw security is a top priority, plus a concrete hardening plan so your agent cannot be hijacked.
Why security is critical
- OpenClaw instances have been found widely exposed with auth bypasses, malware-laced skills, and targeted trojans, and a compromised agent effectively gives an attacker full access to your shell, files, email, and connected services.
12-step hardening checklist
- Core steps include: running openclaw security audit (and --deep/--fix), enforcing a gateway auth token, locking DM pairing and group allowlists, tightening Telegram via BotFather, and enabling a deny-all UFW firewall with SSH only.
- You also lock file permissions (600/700), enable log redaction, sandbox sub-agents with denyTools, move access behind a Tailscale mesh (then close public SSH), rigorously vet skills, use strong models for untrusted input, and review configs and logs monthly.
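The 600/700 permission step can be verified from Python (and, per the golden rules, ideally run from a separate machine or session rather than by the agent itself). A minimal POSIX-only checker sketch:

```python
import os
import stat
from pathlib import Path

def check_permissions(workspace):
    """Report files/dirs whose mode is looser than 600 (files) or
    700 (directories) -- i.e., with any group/other bits set."""
    loose = []
    for path in Path(workspace).rglob("*"):
        mode = stat.S_IMODE(os.stat(path).st_mode)
        if mode & 0o077:                 # any group/other permission bits
            loose.append((str(path), oct(mode)))
    return loose
```

An empty result means every file and directory under the workspace is readable and writable by your user only.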
Tailscale and Mac-specific hardening
- Tailscale is recommended as the biggest single security upgrade: install on local + VPS, confirm SSH over the 100.x IP, then remove public SSH so only Tailscale devices can reach the server.
- On Mac Mini, you also enable FileVault, firewall + stealth mode, disable auto-login and unneeded sharing, ensure Homebrew services bind to localhost only, and keep SIP enabled.
Golden rules
- Never rely on the agent to audit its own perimeter; always verify from a separate machine.
- Always read SKILL.md yourself before installing community skills, and apply least-privilege everywhere (connections, sub-agents, skills) to minimize the blast radius of any compromise; capture a secure baseline config once you are hardened.