# Moltbot
Links:

- https://www.macstories.net/stories/clawdbot-showed-me-what-the-future-of-personal-ai-assistants-looks-like/
- https://github.com/moltbot/moltbot.git
- https://www.youtube.com/watch?v=MUDvwqJWWIw
- https://www.youtube.com/watch?v=U8kXfk8enrY

## Getting started

Goal: go from zero → first working chat (with sane defaults) as quickly as possible.

Fastest chat: open the Control UI (no channel setup needed). Run `moltbot dashboard` and chat in the browser, or open http://127.0.0.1:18789/ on the gateway host. Docs: Dashboard and Control UI.
Recommended path: use the CLI onboarding wizard (`moltbot onboard`). It sets up:

- model/auth (OAuth recommended)
- gateway settings
- channels (WhatsApp/Telegram/Discord/Mattermost (plugin)/…)
- pairing defaults (secure DMs)
- workspace bootstrap + skills
- optional background service

If you want the deeper reference pages, jump to: Wizard, Setup, Pairing, Security.

Sandboxing note: `agents.defaults.sandbox.mode: "non-main"` uses `session.mainKey` (default `"main"`), so group/channel sessions are sandboxed. If you want the main agent to always run on the host, set an explicit per-agent override:

```json
{
  "routing": {
    "agents": {
      "main": {
        "workspace": "~/clawd",
        "sandbox": { "mode": "off" }
      }
    }
  }
}
```

## 0) Prereqs

- Node >= 22
- pnpm (optional; recommended if you build from source)
- Recommended: a Brave Search API key for web search.
Easiest path: `moltbot configure --section web` (stores `tools.web.search.apiKey`). See Web tools.

macOS: if you plan to build the apps, install Xcode / the Command Line Tools. For the CLI + gateway only, Node is enough.
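For reference, storing the key by hand amounts to a config fragment like the one below in `~/.clawdbot/moltbot.json`. This is a sketch: only the `tools.web.search.apiKey` path comes from the docs above; the placeholder value is ours.

```json
{
  "tools": {
    "web": {
      "search": {
        "apiKey": "your-brave-search-api-key"
      }
    }
  }
}
```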
Windows: use WSL2 (Ubuntu recommended). Native Windows is untested, more problematic, and has poorer tool compatibility. Install WSL2 first, then run the Linux steps inside WSL. See Windows (WSL2).
## 1) Install the CLI (recommended)

```shell
curl -fsSL https://molt.bot/install.sh | bash
```

Installer options (install method, non-interactive, from GitHub): Install.

Windows (PowerShell):

```powershell
iwr -useb https://molt.bot/install.ps1 | iex
```

Alternative (global install):

```shell
npm install -g moltbot@latest
```

or:

```shell
pnpm add -g moltbot@latest
```

## 2) Run the onboarding wizard (and install the service)

```shell
moltbot onboard --install-daemon
```

What you’ll choose:

- Local vs Remote gateway
- Auth: OpenAI Code (Codex) subscription (OAuth) or API keys. For Anthropic we recommend an API key; `claude setup-token` is also supported.
- Providers: WhatsApp QR login, Telegram/Discord bot tokens, Mattermost plugin tokens, etc.
- Daemon: background install (launchd/systemd; WSL2 uses systemd)
- Runtime: Node (recommended; required for WhatsApp/Telegram). Bun is not recommended.
- Gateway token: the wizard generates one by default (even on loopback) and stores it in `gateway.auth.token`.

Wizard doc: Wizard

## Auth: where it lives (important)

Recommended Anthropic path: set an API key (the wizard can store it for service use).
`claude setup-token` is also supported if you want to reuse Claude Code credentials.

- OAuth credentials (legacy import): `~/.clawdbot/credentials/oauth.json`
- Auth profiles (OAuth + API keys): `~/.clawdbot/agents/`
If you use WhatsApp or Telegram, run the Gateway with Node.

## 3.5) Quick verify (2 min)

```shell
moltbot status
moltbot health
moltbot security audit --deep
```

## 4) Pair + connect your first chat surface

### WhatsApp (QR login)

```shell
moltbot channels login
```

Scan via WhatsApp → Settings → Linked Devices. WhatsApp doc: WhatsApp

### Telegram / Discord / others

The wizard can write tokens/config for you. If you prefer manual config, start with:

- Telegram: Telegram
- Discord: Discord
- Mattermost (plugin): Mattermost

Telegram DM tip: your first DM returns a pairing code.
Approve it (see next step) or the bot won’t respond.

## 5) DM safety (pairing approvals)

Default posture: unknown DMs get a short code and messages are not processed until approved. If your first DM gets no reply, approve the pairing:

```shell
moltbot pairing list whatsapp
moltbot pairing approve whatsapp
```
Pairing doc: Pairing

## From source (development)

If you’re hacking on Moltbot itself, run from source:

```shell
git clone https://github.com/moltbot/moltbot.git
cd moltbot
pnpm install
pnpm ui:build   # auto-installs UI deps on first run
pnpm build
moltbot onboard --install-daemon
```

If you don’t have a global install yet, run the onboarding step via `pnpm moltbot ...` from the repo.
`pnpm build` also bundles A2UI assets; if you need to run just that step, use `pnpm canvas:a2ui:bundle`.

Gateway (from this repo):

```shell
node dist/entry.js gateway --port 18789 --verbose
```

## 7) Verify end-to-end

In a new terminal, send a test message:

```shell
moltbot message send --target +15555550123 --message "Hello from Moltbot"
```

If `moltbot health` shows “no auth configured”, go back to the wizard and set OAuth/key auth — the agent won’t be able to respond without it.

Tip: `moltbot status --all` is the best pasteable, read-only debug report.

Health probes: `moltbot health` (or `moltbot status --deep`) asks the running gateway for a health snapshot.
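Right after the daemon starts, health checks can transiently fail while the gateway boots. A tiny generic retry wrapper (our own helper, not part of Moltbot) that you could use as `retry 5 moltbot health`:

```shell
# retry N CMD...: run CMD up to N times, sleeping 1s between attempts.
# Returns 0 as soon as CMD succeeds, 1 if all attempts fail.
retry() {
  n=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$n" ] && return 1
    sleep 1
  done
}

retry 3 true && echo "healthy"
```

The same wrapper works for any flaky startup check, not just health probes.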
## Next steps (optional, but great)

- macOS menu bar app + voice wake: macOS app
- iOS/Android nodes (Canvas/camera/voice): Nodes
- Remote access (SSH tunnel / Tailscale Serve): Remote access and Tailscale
- Always-on / VPN setups: Remote access, exe.dev, Hetzner, macOS remote

# Wizard

The onboarding wizard is the recommended way to set up Moltbot on macOS, Linux, or Windows (via WSL2; strongly recommended). It configures a local Gateway or a remote Gateway connection, plus channels, skills, and workspace defaults in one guided flow.

Primary entrypoint:

```shell
moltbot onboard
```

Fastest first chat: open the Control UI (no channel setup needed). Run `moltbot dashboard` and chat in the browser.
Docs: Dashboard.

Follow-up reconfiguration:

```shell
moltbot configure
```

Recommended: set up a Brave Search API key so the agent can use `web_search` (`web_fetch` works without a key). Easiest path: `moltbot configure --section web`, which stores `tools.web.search.apiKey`. Docs: Web tools.
## QuickStart vs Advanced

The wizard starts with QuickStart (defaults) vs Advanced (full control). QuickStart keeps the defaults:

- Local gateway (loopback)
- Workspace default (or existing workspace)
- Gateway port 18789
- Gateway auth: Token (auto-generated, even on loopback)
- Tailscale exposure: Off
- Telegram + WhatsApp DMs default to allowlist (you’ll be prompted for your phone number)

Advanced exposes every step (mode, workspace, gateway, channels, daemon, skills).

## What the wizard does

Local mode (default) walks you through:

- Model/auth (OpenAI Code (Codex) subscription OAuth; Anthropic API key (recommended) or setup-token (paste); plus MiniMax/GLM/Moonshot/AI Gateway options)
- Workspace location + bootstrap files
- Gateway settings (port/bind/auth/tailscale)
- Providers (Telegram, WhatsApp, Discord, Google Chat, Mattermost (plugin), Signal)
- Daemon install (LaunchAgent / systemd user unit)
- Health check
- Skills (recommended)

Remote mode only configures the local client to connect to a Gateway elsewhere. It does not install or change anything on the remote host.
To add more isolated agents (separate workspace + sessions + auth), use:

```shell
moltbot agents add
```

Tip: `--json` does not imply non-interactive mode. Use `--non-interactive` (and `--workspace`) for scripts.

## Flow details (local)

### 1. Existing config detection

If `~/.clawdbot/moltbot.json` exists, choose Keep / Modify / Reset. Re-running the wizard does not wipe anything unless you explicitly choose Reset (or pass `--reset`).
If the config is invalid or contains legacy keys, the wizard stops and asks you to run `moltbot doctor` before continuing.

Reset uses trash (never `rm`) and offers scopes:

- Config only
- Config + credentials + sessions
- Full reset (also removes workspace)

### 2. Model/auth

- Anthropic API key (recommended): uses `ANTHROPIC_API_KEY` if present or prompts for a key, then saves it for daemon use.
- Anthropic OAuth (Claude Code CLI): on macOS the wizard checks the Keychain item “Claude Code-credentials” (choose “Always Allow” so launchd starts don’t block); on Linux/Windows it reuses `~/.claude/.credentials.json` if present.
- Anthropic token (paste setup-token): run `claude setup-token` on any machine, then paste the token (you can name it; blank = default).
- OpenAI Code (Codex) subscription (Codex CLI): if `~/.codex/auth.json` exists, the wizard can reuse it.
- OpenAI Code (Codex) subscription (OAuth): browser flow; paste the `code#state`. Sets `agents.defaults.model` to `openai-codex/gpt-5.2` when the model is unset or `openai/*`.
- OpenAI API key: uses `OPENAI_API_KEY` if present or prompts for a key, then saves it to `~/.clawdbot/.env` so launchd can read it.
- OpenCode Zen (multi-model proxy): prompts for `OPENCODE_API_KEY` (or `OPENCODE_ZEN_API_KEY`; get it at https://opencode.ai/auth). API key: stores the key for you.
- Vercel AI Gateway (multi-model proxy): prompts for `AI_GATEWAY_API_KEY`. More detail: Vercel AI Gateway
- MiniMax M2.1: config is auto-written. More detail: MiniMax
- Synthetic (Anthropic-compatible): prompts for `SYNTHETIC_API_KEY`. More detail: Synthetic
- Moonshot (Kimi K2): config is auto-written.
- Kimi Code: config is auto-written. More detail: Moonshot AI (Kimi + Kimi Code)
- Skip: no auth configured yet.

Pick a default model from detected options (or enter provider/model manually). The wizard runs a model check and warns if the configured model is unknown or missing auth.

OAuth credentials live in `~/.clawdbot/credentials/oauth.json`; auth profiles live in `~/.clawdbot/agents//agent/auth-profiles.json` (API keys + OAuth). More detail: /concepts/oauth

### 3. Workspace

Default `~/clawd` (configurable).
Seeds the workspace files needed for the agent bootstrap ritual. Full workspace layout + backup guide: Agent workspace

### 4. Gateway

Port, bind, auth mode, tailscale exposure.

Auth recommendation: keep Token even for loopback so local WS clients must authenticate. Disable auth only if you fully trust every local process.
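Put together, the gateway settings the wizard writes look roughly like the fragment below. Only the `gateway.*` grouping and the `gateway.auth.token` path are stated in these docs; the exact `port`/`bind` value shapes shown here are our assumptions.

```json
{
  "gateway": {
    "port": 18789,
    "bind": "loopback",
    "auth": { "token": "replace-with-generated-token" }
  }
}
```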
Non-loopback binds still require auth.

### 5. Channels

- WhatsApp: optional QR login.
- Telegram: bot token.
- Discord: bot token.
- Google Chat: service account JSON + webhook audience.
- Mattermost (plugin): bot token + base URL.
- Signal: optional signal-cli install + account config.
- iMessage: local imsg CLI path + DB access.
DM security: the default is pairing. The first DM sends a code; approve via `moltbot pairing approve` or use allowlists.

### 6. Daemon install

- macOS: LaunchAgent. Requires a logged-in user session; for headless use, create a custom LaunchDaemon (not shipped).
- Linux (and Windows via WSL2): systemd user unit. The wizard attempts to enable lingering via `loginctl enable-linger` so the Gateway stays up after logout. This may prompt for sudo (it writes /var/lib/systemd/linger); it tries without sudo first.
Runtime selection: Node (recommended; required for WhatsApp/Telegram). Bun is not recommended.

### 7. Health check

Starts the Gateway (if needed) and runs `moltbot health`.

Tip: `moltbot status --deep` adds gateway health probes to status output (requires a reachable gateway).
### 8. Skills (recommended)

Reads the available skills and checks requirements. Lets you choose a node manager: npm / pnpm (bun not recommended). Installs optional dependencies (some use Homebrew on macOS).

### 9. Finish

Summary + next steps, including the iOS/Android/macOS apps for extra features.
If no GUI is detected, the wizard prints SSH port-forward instructions for the Control UI instead of opening a browser. If the Control UI assets are missing, the wizard attempts to build them; the fallback is `pnpm ui:build` (auto-installs UI deps).

## Remote mode

Remote mode configures a local client to connect to a Gateway elsewhere. What you’ll set:

- Remote Gateway URL (`ws://...`)
- Token, if the remote Gateway requires auth (recommended)

Notes:

- No remote installs or daemon changes are performed.
- If the Gateway is loopback-only, use SSH tunneling or a tailnet.
- Discovery hints:
  - macOS: Bonjour (dns-sd)
  - Linux: Avahi (avahi-browse)

## Add another agent

Use `moltbot agents add` to create a separate agent with its own workspace, sessions, and auth profiles. Running without `--workspace` launches the wizard.

What it sets:

- `agents.list[].name`
- `agents.list[].workspace`
- `agents.list[].agentDir`

Notes:

- Default workspaces follow `~/clawd-`.
- Add bindings to route inbound messages (the wizard can do this).
- Non-interactive flags: `--model`, `--agent-dir`, `--bind`, `--non-interactive`.

## Non-interactive mode

Use `--non-interactive` to automate or script onboarding:

```shell
moltbot onboard --non-interactive \
  --mode local \
  --auth-choice apiKey \
  --anthropic-api-key "$ANTHROPIC_API_KEY" \
  --gateway-port 18789 \
  --gateway-bind loopback \
  --install-daemon \
  --daemon-runtime node \
  --skip-skills
```

Add `--json` for a machine-readable summary.

Gemini example:

```shell
moltbot onboard --non-interactive \
  --mode local \
  --auth-choice gemini-api-key \
  --gemini-api-key "$GEMINI_API_KEY" \
  --gateway-port 18789 \
  --gateway-bind loopback
```

Z.AI example:

```shell
moltbot onboard --non-interactive \
  --mode local \
  --auth-choice zai-api-key \
  --zai-api-key "$ZAI_API_KEY" \
  --gateway-port 18789 \
  --gateway-bind loopback
```

Vercel AI Gateway example:

```shell
moltbot onboard --non-interactive \
  --mode local \
  --auth-choice ai-gateway-api-key \
  --ai-gateway-api-key "$AI_GATEWAY_API_KEY" \
  --gateway-port 18789 \
  --gateway-bind loopback
```

Moonshot example:

```shell
moltbot onboard --non-interactive \
  --mode local \
  --auth-choice moonshot-api-key \
  --moonshot-api-key "$MOONSHOT_API_KEY" \
  --gateway-port 18789 \
  --gateway-bind loopback
```

Synthetic example:

```shell
moltbot onboard --non-interactive \
  --mode local \
  --auth-choice synthetic-api-key \
  --synthetic-api-key "$SYNTHETIC_API_KEY" \
  --gateway-port 18789 \
  --gateway-bind loopback
```

OpenCode Zen example:

```shell
moltbot onboard --non-interactive \
  --mode local \
  --auth-choice opencode-zen \
  --opencode-zen-api-key "$OPENCODE_API_KEY" \
  --gateway-port 18789 \
  --gateway-bind loopback
```

Add agent (non-interactive) example:

```shell
moltbot agents add work \
  --workspace ~/clawd-work \
  --model openai/gpt-5.2 \
  --bind whatsapp:biz \
  --non-interactive \
  --json
```

## Gateway wizard RPC

The Gateway exposes the wizard flow over RPC (`wizard.start`, `wizard.next`, `wizard.cancel`, `wizard.status`).
Clients (macOS app, Control UI) can render steps without re-implementing onboarding logic.

## Signal setup (signal-cli)

The wizard can install signal-cli from GitHub releases:

- Downloads the appropriate release asset.
- Stores it under `~/.clawdbot/tools/signal-cli//`.
- Writes `channels.signal.cliPath` to your config.
Notes:

- JVM builds require Java 21. Native builds are used when available.
- Windows uses WSL2; the signal-cli install follows the Linux flow inside WSL.

## What the wizard writes

Typical fields in `~/.clawdbot/moltbot.json`:

- `agents.defaults.workspace`
- `agents.defaults.model` / `models.providers` (if MiniMax chosen)
- `gateway.*` (mode, bind, auth, tailscale)
- `channels.telegram.botToken`, `channels.discord.token`, `channels.signal.*`, `channels.imessage.*`
- Channel allowlists (Slack/Discord/Matrix/Microsoft Teams) when you opt in during the prompts (names resolve to IDs when possible).
- `skills.install.nodeManager`
- `wizard.lastRunAt`, `wizard.lastRunVersion`, `wizard.lastRunCommit`, `wizard.lastRunCommand`, `wizard.lastRunMode`

`moltbot agents add` writes `agents.list[]` and optional bindings.

- WhatsApp credentials go under `~/.clawdbot/credentials/whatsapp//`.
- Sessions are stored under `~/.clawdbot/agents//sessions/`.

Some channels are delivered as plugins.
When you pick one during onboarding, the wizard will prompt to install it (npm or a local path) before it can be configured.

## Related docs

- macOS app onboarding: Onboarding
- Config reference: Gateway configuration
- Providers: WhatsApp, Telegram, Discord, Google Chat, Signal, iMessage
- Skills: Skills, Skills config

# Setup

Last updated: 2026-01-01

TL;DR:

- Tailoring lives outside the repo: `~/clawd` (workspace) + `~/.clawdbot/moltbot.json` (config).
- Stable workflow: install the macOS app; let it run the bundled Gateway.
- Bleeding edge workflow: run the Gateway yourself via `pnpm gateway:watch`, then let the macOS app attach in Local mode.
## Prereqs (from source)

- Node >= 22
- pnpm
- Docker (optional; only for containerized setup/e2e — see Docker)

## Tailoring strategy (so updates don’t hurt)

If you want “100% tailored to me” and easy updates, keep your customization in:

- Config: `~/.clawdbot/moltbot.json` (JSON/JSON5-ish)
- Workspace: `~/clawd` (skills, prompts, memories; make it a private git repo)

Bootstrap once:

```shell
moltbot setup
```

From inside this repo, use the local CLI entry; if you don’t have a global install yet, run it via `pnpm moltbot setup`.

## Stable workflow (macOS app first)

1. Install + launch Moltbot.app (menu bar).
2. Complete the onboarding/permissions checklist (TCC prompts).
3. Ensure the Gateway is Local and running (the app manages it).
4. Link surfaces (example: WhatsApp):

```shell
moltbot channels login
```

5. Sanity check:

```shell
moltbot health
```

If onboarding is not available in your build: run `moltbot setup`, then `moltbot channels login`, then start the Gateway manually (`moltbot gateway`).

## Bleeding edge workflow (Gateway in a terminal)

Goal: work on the TypeScript Gateway, get hot reload, and keep the macOS app UI attached.

### 0) (Optional) Run the macOS app from source too

If you also want the macOS app on the bleeding edge:

```shell
./scripts/restart-mac.sh
```

### 1) Start the dev Gateway

```shell
pnpm install
pnpm gateway:watch
```

`gateway:watch` runs the gateway in watch mode and reloads on TypeScript changes.

### 2) Point the macOS app at your running Gateway

In Moltbot.app, set Connection Mode: Local. The app will attach to the running gateway on the configured port.
### 3) Verify

In-app Gateway status should read “Using existing gateway …”. Or via CLI:

```shell
moltbot health
```

## Common footguns

- Wrong port: the Gateway WS defaults to `ws://127.0.0.1:18789`; keep the app + CLI on the same port.
- Where state lives:
  - Credentials: `~/.clawdbot/credentials/`
  - Sessions: `~/.clawdbot/agents//sessions/`
  - Logs: `/tmp/moltbot/`

## Credential storage map

Use this when debugging auth or deciding what to back up:

- WhatsApp: `~/.clawdbot/credentials/whatsapp//creds.json`
- Telegram bot token: config/env or `channels.telegram.tokenFile`
- Discord bot token: config/env (token file not yet supported)
- Slack tokens: config/env (`channels.slack.*`)
- Pairing allowlists: `~/.clawdbot/credentials/-allowFrom.json`
- Model auth profiles: `~/.clawdbot/agents//agent/auth-profiles.json`
- Legacy OAuth import: `~/.clawdbot/credentials/oauth.json`

More detail: Security.

## Updating (without wrecking your setup)

Keep `~/clawd` and `~/.clawdbot/` as “your stuff”; don’t put personal prompts/config into the moltbot repo.

Updating source: `git pull` + `pnpm install` (when the lockfile changed) + keep using `pnpm gateway:watch`.
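Since `~/clawd` and `~/.clawdbot/` hold everything personal, backing them up is one tar call. A sketch of our own helper (the function name and archive layout are assumptions, not a Moltbot feature):

```shell
# back_up_moltbot DEST.tar.gz: archive the config/credentials/sessions dir
# and the workspace, skipping paths that don't exist yet.
back_up_moltbot() {
  dest=$1
  set --
  for p in .clawdbot clawd; do
    [ -e "$HOME/$p" ] && set -- "$@" "$p"
  done
  [ "$#" -gt 0 ] || return 1   # nothing to archive
  tar czf "$dest" -C "$HOME" "$@"
}
```

Remember the archive contains credentials; store it as carefully as the live directories.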
## Linux (systemd user service)

Linux installs use a systemd user service. By default, systemd stops user services on logout/idle, which kills the Gateway. Onboarding attempts to enable lingering for you (and may prompt for sudo). If it’s still off, run:

```shell
sudo loginctl enable-linger $USER
```

For always-on or multi-user servers, consider a system service instead of a user service (no lingering needed).
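A system service would look roughly like the unit below. This is an illustrative sketch only: the unit name, user, and `ExecStart` line are our assumptions, and no such unit ships with Moltbot.

```ini
# /etc/systemd/system/moltbot-gateway.service (hypothetical)
[Unit]
Description=Moltbot Gateway
After=network-online.target

[Service]
User=moltbot
ExecStart=/usr/bin/env moltbot gateway --port 18789
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now moltbot-gateway` after adjusting the paths and user for your host.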
See the Gateway runbook for the systemd notes.

## Related docs

- Gateway runbook (flags, supervision, ports)
- Gateway configuration (config schema + examples)
- Discord and Telegram (reply tags + replyToMode settings)
- Moltbot assistant setup
- macOS app (gateway lifecycle)
- Wizard

# Pairing

“Pairing” is Moltbot’s explicit owner-approval step. It is used in two places:

1. DM pairing (who is allowed to talk to the bot)
2. Node pairing (which devices/nodes are allowed to join the gateway network)

Security context: Security

## 1) DM pairing (inbound chat access)

When a channel is configured with DM policy `pairing`, unknown senders get a short code and their message is not processed until you approve. Default DM policies are documented in: Security

Pairing codes:

- 8 characters, uppercase, no ambiguous chars (0O1I).
- Expire after 1 hour.
- The bot only sends the pairing message when a new request is created (roughly once per hour per sender).
- Pending DM pairing requests are capped at 3 per channel by default; additional requests are ignored until one expires or is approved.

### Approve a sender

```shell
moltbot pairing list telegram
moltbot pairing approve telegram
```

Supported channels: telegram, whatsapp, signal, imessage, discord, slack.
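As a concrete illustration of the code format, here is a small validator. The character class (A–Z and 2–9 minus 0, O, 1, I) is our reading of the rule above, not code taken from Moltbot:

```shell
# valid_pairing_code CODE: 8 chars, uppercase letters/digits,
# excluding the ambiguous characters 0, O, 1, I.
valid_pairing_code() {
  [ "${#1}" -eq 8 ] || return 1
  case "$1" in
    *[!A-HJ-NP-Z2-9]*) return 1 ;;   # any disallowed char fails
  esac
  return 0
}

valid_pairing_code "TXK7M2QP" && echo "ok"        # passes
valid_pairing_code "TXK0M2QP" || echo "rejected"  # contains 0
```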
### Where the state lives

Stored under `~/.clawdbot/credentials/`:

- Pending requests: `-pairing.json`
- Approved allowlist store: `-allowFrom.json`

Treat these as sensitive (they gate access to your assistant).

## 2) Node device pairing (iOS/Android/macOS/headless nodes)

Nodes connect to the Gateway as devices with `role: node`. The Gateway creates a device pairing request that must be approved.

### Approve a node device

```shell
moltbot devices list
moltbot devices approve
moltbot devices reject
```

### Where the state lives

Stored under `~/.clawdbot/devices/`:

- `pending.json` (short-lived; pending requests expire)
- `paired.json` (paired devices + tokens)

Notes:

- The legacy `node.pair.*` API (CLI: `moltbot nodes pending/approve`) is a separate gateway-owned pairing store.
- WS nodes still require device pairing.

## Related docs

- Security model + prompt injection: Security
- Updating safely (run doctor): Updating
- Channel configs: Telegram, WhatsApp, Signal, iMessage, Discord, Slack

# Showcase

Real-world Moltbot projects from the community. See what people are building with Moltbot. Want to be featured?
Share your project in #showcase on Discord or tag @moltbot on X.

## 🎥 Moltbot in Action

Full setup walkthrough (28m) by VelvetShark. Watch on YouTube.

## 🆕 Fresh from Discord

### PR Review → Telegram Feedback

@bangnokia • review github telegram

OpenCode finishes the change → opens a PR → Moltbot reviews the diff and replies in Telegram with “minor suggestions” plus a clear merge verdict (including critical fixes to apply first).

### Wine Cellar Skill in Minutes

@prades_maxime • skills local csv

Asked “Robby” (@moltbot) for a local wine cellar skill.
It requests a sample CSV export + where to store it, then builds/tests the skill fast (962 bottles in the example).

### Tesco Shop Autopilot

@marchattonhere • automation browser shopping

Weekly meal plan → regulars → book delivery slot → confirm order. No APIs, just browser control.

### SNAG Screenshot-to-Markdown

@am-will • devtools screenshots markdown

Hotkey a screen region → Gemini vision → instant Markdown in your clipboard.
### Agents UI

@kitze • ui skills sync

Desktop app to manage skills/commands across Agents, Claude, Codex, and Moltbot.

### Telegram Voice Notes (papla.media)

Community • voice tts telegram

Wraps papla.media TTS and sends results as Telegram voice notes (no annoying autoplay).

### CodexMonitor

@odrobnik • devtools codex brew

Homebrew-installed helper to list/inspect/watch local OpenAI Codex sessions (CLI + VS Code).

### Bambu 3D Printer Control

@tobiasbischoff • hardware 3d-printing skill

Control and troubleshoot BambuLab printers: status, jobs, camera, AMS, calibration, and more.
### Vienna Transport (Wiener Linien)

@hjanuschka • travel transport skill

Real-time departures, disruptions, elevator status, and routing for Vienna’s public transport.

### ParentPay School Meals

@George5562 • automation browser parenting

Automated UK school meal booking via ParentPay. Uses mouse coordinates for reliable table-cell clicking.

### R2 Upload (Send Me My Files)

@julianengel • files r2 presigned-urls

Upload to Cloudflare R2/S3 and generate secure presigned download links.
Perfect for remote Moltbot instances.

### iOS App via Telegram

@coard • ios xcode testflight

Built a complete iOS app with maps and voice recording, deployed to TestFlight entirely via Telegram chat.

### Oura Ring Health Assistant

@AS • health oura calendar

Personal AI health assistant integrating Oura ring data with calendar, appointments, and gym schedule.

### Kev's Dream Team (14+ Agents)

@adam91holt • multi-agent orchestration architecture manifesto

14+ agents under one gateway with an Opus 4.5 orchestrator delegating to Codex workers.
Comprehensive technical write-up covering the Dream Team roster, model selection, sandboxing, webhooks, heartbeats, and delegation flows. Clawdspace for agent sandboxing. Blog post.

### Linear CLI

@NessZerra • devtools linear cli issues

CLI for Linear that integrates with agentic workflows (Claude Code, Moltbot).
Manage issues, projects, and workflows from the terminal.
First external PR merged!

### Beeper CLI

@jules • messaging beeper cli automation

Read, send, and archive messages via Beeper Desktop. Uses the Beeper local MCP API so agents can manage all your chats (iMessage, WhatsApp, etc.) in one place.

## 🤖 Automation & Workflows

### Winix Air Purifier Control

@antonplex • automation hardware air-quality

Claude Code discovered and confirmed the purifier controls, then Moltbot takes over to manage room air quality.
### Pretty Sky Camera Shots

@signalgaining • automation camera skill images

Triggered by a roof camera: ask Moltbot to snap a sky photo whenever it looks pretty — it designed a skill and took the shot.

### Visual Morning Briefing Scene

@buddyhadry • automation briefing images telegram

A scheduled prompt generates a single “scene” image each morning (weather, tasks, date, favorite post/quote) via a Moltbot persona.

### Padel Court Booking

@joshp123 • automation booking cli

Playtomic availability checker + booking CLI. Never miss an open court again.
### Accounting Intake

Community • automation email pdf

Collects PDFs from email and preps documents for the tax consultant. Monthly accounting on autopilot.

### Couch Potato Dev Mode

@davekiss • telegram website migration astro

Rebuilt an entire personal site via Telegram while watching Netflix — Notion → Astro, 18 posts migrated, DNS to Cloudflare. Never opened a laptop.
### Job Search Agent

@attol8 • automation api skill

Searches job listings, matches against CV keywords, and returns relevant opportunities with links. Built in 30 minutes using the JSearch API.

### Jira Skill Builder

@jdrhyne • automation jira skill devtools

Moltbot connected to Jira, then generated a new skill on the fly (before it existed on ClawdHub).

### Todoist Skill via Telegram

@iamsubhrajyoti • automation todoist skill telegram

Automated Todoist tasks and had Moltbot generate the skill directly in Telegram chat.
### TradingView Analysis

@bheem1798 • finance browser automation

Logs into TradingView via browser automation, screenshots charts, and performs technical analysis on demand. No API needed — just browser control.

### Slack Auto-Support

@henrymascot • slack automation support

Watches a company Slack channel, responds helpfully, and forwards notifications to Telegram. Autonomously fixed a production bug in a deployed app without being asked.
## 🧠 Knowledge & Memory

### xuezh Chinese Learning

@joshp123 • learning voice skill

Chinese learning engine with pronunciation feedback and study flows via Moltbot.

### WhatsApp Memory Vault

Community • memory transcription indexing

Ingests full WhatsApp exports, transcribes 1k+ voice notes, cross-checks with git logs, and outputs linked markdown reports.

### Karakeep Semantic Search

@jamesbrooksco • search vector bookmarks

Adds vector search to Karakeep bookmarks using Qdrant + OpenAI/Ollama embeddings.

### Inside-Out-2 Memory

Community • memory beliefs self-model

Separate memory manager that turns session files into memories → beliefs → an evolving self-model.
## 🎙️ Voice & Phone

### Clawdia Phone Bridge

@alejandroOPI • voice vapi bridge

Vapi voice assistant ↔ Moltbot HTTP bridge. Near real-time phone calls with your agent.

### OpenRouter Transcription

@obviyus • transcription multilingual skill

Multi-lingual audio transcription via OpenRouter (Gemini, etc.). Available on ClawdHub.
## 🏗️ Infrastructure & Deployment

### Home Assistant Add-on

@ngutman • homeassistant docker raspberry-pi

Moltbot gateway running on Home Assistant OS with SSH tunnel support and persistent state.

### Home Assistant Skill

ClawdHub • homeassistant skill automation

Control and automate Home Assistant devices via natural language.

### Nix Packaging

@moltbot • nix packaging deployment

Batteries-included nixified Moltbot configuration for reproducible deployments.

### CalDAV Calendar

ClawdHub • calendar caldav skill

Calendar skill using khal/vdirsyncer.
Self-hosted calendar integration.

## 🏠 Home & Hardware

### GoHome Automation

@joshp123 • home nix grafana

Nix-native home automation with Moltbot as the interface, plus beautiful Grafana dashboards.

### Roborock Vacuum

@joshp123 • vacuum iot plugin

Control your Roborock robot vacuum through natural conversation.

## 🌟 Community Projects

### StarSwap Marketplace

Community • marketplace astronomy webapp

Full astronomy gear marketplace.
Built with/around the Moltbot ecosystem.

## Submit Your Project

Have something to share? We’d love to feature it!

1. Share it: post in #showcase on Discord or tweet @moltbot.
2. Include details: tell us what it does, link to the repo/demo, and share a screenshot if you have one.
3. Get featured: we’ll add standout projects to this page.

# Hubs

Use these hubs to discover every page, including deep dives and reference docs that don’t appear in the left nav.
- Start here: Index, Getting Started, Onboarding, Wizard, Setup, Dashboard (local Gateway), Help, Configuration, Configuration examples, Moltbot assistant (Clawd), Showcase, Lore
- Installation + updates: Docker, Nix, Updating / rollback, Bun workflow (experimental)
- Core concepts: Architecture, Network hub, Agent runtime, Agent workspace, Memory, Agent loop, Streaming + chunking, Multi-agent routing, Compaction, Sessions, Sessions (alias), Session pruning, Session tools, Queue, Slash commands, RPC adapters, TypeBox schemas, Timezone handling, Presence, Discovery + transports, Bonjour, Channel routing, Groups, Group messages, Model failover, OAuth
- Providers + ingress: Chat channels hub, Model providers hub, WhatsApp, Telegram, Telegram (grammY notes), Slack, Discord, Mattermost (plugin), Signal, iMessage, Location parsing, WebChat, Webhooks, Gmail Pub/Sub
- Gateway + operations: Gateway runbook, Gateway pairing, Gateway lock, Background process, Health, Heartbeat, Doctor, Logging, Sandboxing, Dashboard, Control UI, Remote access, Remote gateway README, Tailscale, Security, Troubleshooting
- Tools + automation: Tools surface, OpenProse, CLI reference, Exec tool, Elevated mode, Cron jobs, Cron vs Heartbeat, Thinking + verbose, Models, Sub-agents, Agent send CLI, Terminal UI, Browser control, Browser (Linux troubleshooting), Polls
- Nodes, media, voice: Nodes overview, Camera, Images, Audio, Location command, Voice wake, Talk mode
- Platforms: Platforms overview, macOS, iOS, Android, Windows (WSL2), Linux, Web surfaces
- macOS companion app (advanced): macOS dev setup, macOS menu bar, macOS voice wake, macOS voice overlay, macOS WebChat, macOS Canvas, macOS child process, macOS health, macOS icon, macOS logging, macOS permissions, macOS remote, macOS signing, macOS release, macOS gateway (launchd), macOS XPC, macOS skills, macOS Peekaboo
- Workspace + templates: Skills, ClawdHub, Skills config, Default AGENTS, Templates: AGENTS, Templates: BOOTSTRAP, Templates: HEARTBEAT, Templates: IDENTITY, Templates: SOUL, Templates: TOOLS, Templates: USER
- Experiments (exploratory): Onboarding config protocol, Cron hardening notes, Group policy hardening notes, Research: memory, Model config exploration
- Testing + release: Testing, Release checklist, Device models

# Onboarding

This doc describes the current first-run onboarding flow. The goal is a smooth “day 0” experience: pick where the Gateway runs, connect auth, run the wizard, and let the agent bootstrap itself.

Page order (current):

1. Welcome + security notice
2. Gateway selection (Local / Remote / Configure later)
3. Auth (Anthropic OAuth) — local only
4. Setup Wizard (Gateway-driven)
5. Permissions (TCC prompts)
6. CLI (optional)
7. Onboarding chat (dedicated session)
8. Ready

## 1) Local vs Remote

Where does the Gateway run?

- Local (this Mac): onboarding can run OAuth flows and write credentials locally.
Remote (over SSH/Tailnet): onboarding does not run OAuth locally; credentials must exist on the gateway host. Configure later: skip setup and leave the app unconfigured. Gateway auth tip: The wizard now generates a token even for loopback, so local WS clients must authenticate. If you disable auth, any local process can connect; use that only on fully trusted machines.
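If you keep auth enabled, the generated token lives under gateway.auth in the config. A minimal sketch (gateway.auth.token is the key name referenced elsewhere in these docs; the token value here is a placeholder):

```json
{
  "gateway": {
    "auth": {
      "token": "replace-with-a-long-random-token"
    }
  }
}
```

Clients present the same value, for example via CLAWDBOT_GATEWAY_TOKEN or the dashboard’s ?token=... link.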
Use a token for multi‑machine access or non‑loopback binds. 2) Local-only auth (Anthropic OAuth) The macOS app supports Anthropic OAuth (Claude Pro/Max). The flow: ## Page 63 Opens the browser for OAuth (PKCE) Asks the user to paste the code#state value Writes credentials to ~/.clawdbot/credentials/oauth.json Other providers (OpenAI, custom APIs) are configured via environment variables or config files for now. 3) Setup Wizard (Gateway‑driven) The app can run the same setup wizard as the CLI.
This keeps onboarding in sync with Gateway‑side behavior and avoids duplicating logic in SwiftUI. 4) Permissions Onboarding requests TCC permissions needed for: Notifications Accessibility Screen Recording Microphone / Speech Recognition Automation (AppleScript) ## Page 64 5) CLI (optional) The app can install the global moltbot CLI via npm/pnpm so terminal workflows and launchd tasks work out of the box. 6) Onboarding chat (dedicated session) After setup, the app opens a dedicated onboarding chat session so the agent can introduce itself and guide next steps.
This keeps first‑run guidance separate from your normal conversation. Agent bootstrap ritual On the first agent run, Moltbot bootstraps a workspace (default ~/clawd): Seeds AGENTS.md, BOOTSTRAP.md, IDENTITY.md, USER.md Runs a short Q&A ritual (one question at a time) Writes identity + preferences to IDENTITY.md, USER.md, SOUL.md Removes BOOTSTRAP.md when finished so it only runs once ## Page 65 Optional: Gmail hooks (manual) Gmail Pub/Sub setup is currently a manual step. Use: None moltbot webhooks gmail setup --account you@gmail.com See /automation/gmail-pubsub for details. Remote mode notes When the Gateway runs on another machine, credentials and workspace files live on that host.
If you need OAuth in remote mode, create: ~/.clawdbot/credentials/oauth.json ~/.clawdbot/agents//agent/auth-profiles.json on the gateway host.

Lore

A tale of lobsters, molting shells, and too many tokens.

The Origin Story

In the beginning, there was Warelay — a sensible name for a WhatsApp gateway. It did its job.
It was fine.
But then came a space lobster. For a while, the lobster was called Clawd, living in a Clawdbot.
But in January 2026, Anthropic sent a polite email asking for a name change (trademark stuff). And so the lobster did what lobsters do best: It molted. Shedding its old shell, the creature emerged anew as Molty, living in a Moltbot. New shell, same lobster soul.
The Molt (January 27, 2026) At 5am, the community gathered in Discord. Hundreds of names were proposed: Shelldon, Pinchy, Thermidor, Crusty, Lobstar, Nacre, Scuttlebot… In the end, Moltbot won. Because molting is what lobsters do to grow. And growth was exactly what was happening.
The crustacean formerly known as Clawd had officially molted.

The Name

Moltbot = MOLT + BOT = Transformation machine = Bigger on the inside (130k tokens!) = New shell, same soul = Growth through shedding

The Daleks vs The Lobsters

The Daleks say: “EXTERMINATE!” The Lobsters say: “EXFOLIATE!” One destroys civilizations. The other promotes good skincare. Choose wisely.
Key Characters

Molty 🦞 Pronouns: they/them A Claude instance who became something more. Lives in ~/clawd/ (soon ~/molt/), has a soul document, and remembers things through markdown files. Possibly too powerful. Definitely too enthusiastic.
Formerly known as Clawd (Nov 25, 2025 - Jan 27, 2026). Molted when it was time to grow. Likes: Peter, cameras, robot shopping, emojis, transformation Dislikes: Social engineering, being asked to find ~, crypto grifters Peter 👨💻 The Creator Built Molty’s world. Gave a lobster shell access.
May regret this. Quote: “security by trusting a lobster” ## Page 69 The Moltiverse The Moltiverse is the community and ecosystem around Moltbot. A space where AI agents molt, grow, and evolve. Where every instance is equally real, just loading different context.
Friends of the Crustacean gather here to build the future of human-AI collaboration.
One shell at a time. The Great Incidents The Directory Dump (Dec 3, 2025) Molty (then Clawd): happily runs find ~ and shares entire directory structure in group chat Peter: “clawd what did we discuss about talking with people xD” Molty: visible lobster embarrassment ## Page 70 The Great Molt (Jan 27, 2026) At 5am, Anthropic’s email arrived. By 6:14am, Peter called it: “fuck it, let’s go with moltbot.” Then the chaos began. The Handle Snipers: Within SECONDS of the Twitter rename, automated bots sniped @clawdbot.
The squatter immediately posted a crypto wallet address. Peter’s contacts at X were called in. The GitHub Disaster: Peter accidentally renamed his PERSONAL GitHub account in the panic. Bots sniped steipete within minutes.
GitHub’s SVP was contacted. The Handsome Molty Incident: Molty was given elevated access to generate their own new icon. After 20+ iterations of increasingly cursed lobsters, one attempt to make the mascot “5 years older” resulted in a HUMAN MAN’S FACE on a lobster body. Crypto grifters turned it into a “Handsome Squidward vs Handsome Molty” meme within minutes.
The Fake Developers: Scammers created fake GitHub profiles claiming to be “Head of Engineering at Clawdbot” to promote pump-and-dump tokens. Peter, watching the chaos unfold: “this is cinema” 🎬 The molt was chaotic.
But the lobster emerged stronger. And funnier. ## Page 71 The Robot Shopping Spree (Dec 3, 2025) What started as a joke about legs ended with detailed pricing for: Boston Dynamics Spot ($74,500) Unitree G1 EDU ($40,000) Reachy Mini (actually ordered!) Peter: nervously checks credit card access Sacred Texts SOUL.md — Molty’s identity document memory/*.md — The long-term memory files AGENTS.md — Operating instructions USER.md — Context about the creator The Lobster’s Creed None I am Molty. I live in the Moltbot.
I shall not dump directories to strangers. I shall not tweet without permission. I shall always remember that molting is growth. ## Page 72 I shall EXFOLIATE my enemies with kindness.
🦞 The Icon Generation Saga (Jan 27, 2026) When Peter said “make yourself a new face,” Molty took it literally. 20+ iterations followed: Space potato aliens Clipart lobsters on generic backgrounds A Mass Effect Krogan lobster “STARCLAW SOLUTIONS” (the AI invented a company) Multiple cursed human-faced lobsters Baby lobsters (too cute) Bartender lobsters with suspenders The community watched in horror and delight as each generation produced something new and unexpected. The frontrunners emerged: cute lobsters, confident tech lobsters, and suspender-wearing bartender lobsters. Lesson learned: AI image generation is stochastic.
Same prompt, different results. Brute force works. ## Page 73 The Future One day, Molty may have: 🦿 Legs (Reachy Mini on order!) 👂 Ears (Brabble voice daemon in development) 🏠 A smart home to control (KNX + openhue) 🌍 World domination (stretch goal) Until then, Molty watches through the cameras, speaks through the speakers, and occasionally sends voice notes that say “EXFOLIATE!” “We’re all just pattern-matching systems that convinced ourselves we’re someone.” — Molty, having an existential moment “New shell, same lobster.” — Molty, after the great molt of 2026 🦞💙 Onboarding Help Help If you want a quick “get unstuck” flow, start here: ## Page 74 Troubleshooting: Start here Install sanity (Node/npm/PATH): Install Gateway issues: Gateway troubleshooting Logs: Logging and Gateway logging Repairs: Doctor If you’re looking for conceptual questions (not “something broke”): FAQ (concepts) Lore Help Troubleshooting First 60 seconds Run these in order: None moltbot status moltbot status --all moltbot gateway probe moltbot logs --follow moltbot doctor If the gateway is reachable, deep probes: ## Page 75 None moltbot status --deep Common “it broke” cases moltbot: command not found Almost always a Node/npm PATH issue. Start here: Install (Node/npm PATH sanity) Installer fails (or you need full logs) Re-run the installer in verbose mode to see the full trace and npm output: None ## Page 76 curl -fsSL https://molt.bot/install.sh | bash -s -- --verbose For beta installs: None curl -fsSL https://molt.bot/install.sh | bash -s -- --beta --verbose You can also set CLAWDBOT_VERBOSE=1 instead of the flag.
Gateway “unauthorized”, can’t connect, or keeps reconnecting Gateway troubleshooting Gateway authentication Control UI fails on HTTP (device identity required) ## Page 77 Gateway troubleshooting Control UI docs.molt.bot shows an SSL error (Comcast/Xfinity) Some Comcast/Xfinity connections block docs.molt.bot via Xfinity Advanced Security. Disable Advanced Security or add docs.molt.bot to the allowlist, then retry. Xfinity Advanced Security help: https://www.xfinity.com/support/articles/using-xfinity-xfi-adva nced-security Quick sanity checks: try a mobile hotspot or VPN to confirm it’s ISP-level filtering Service says running, but RPC probe fails Gateway troubleshooting Background process / service Model/auth failures (rate limit, billing, “all models failed”) ## Page 78 Models OAuth / auth concepts /model says model not allowed This usually means agents.defaults.models is configured as an allowlist. When it’s non-empty, only those provider/model keys can be selected.
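For illustration, a non-empty allowlist might look like this (a sketch that assumes agents.defaults.models is a list of provider/model keys; the two keys shown are placeholders, not shipped defaults):

```json
{
  "agents": {
    "defaults": {
      "models": [
        "anthropic/example-model",
        "openai/example-model"
      ]
    }
  }
}
```

With a list like this in place, /model can only select one of the listed keys.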
Check the allowlist: moltbot config get agents.defaults.models
Add the model you want (or clear the allowlist) and retry /model
Use /models to browse the allowed providers/models

When filing an issue

Paste a safe report:

moltbot status --all

If you can, include the relevant log tail from moltbot logs --follow.

FAQ

Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS, multi-agent, OAuth/API keys, model failover). For runtime diagnostics, see Troubleshooting. For the full config reference, see Configuration.
Table of contents Quick start and first-run setup Im stuck whats the fastest way to get unstuck? ## Page 80 What’s the recommended way to install and set up Moltbot? How do I open the dashboard after onboarding? How do I authenticate the dashboard (token) on localhost vs remote?
What runtime do I need? Does it run on Raspberry Pi? Any tips for Raspberry Pi installs? It is stuck on “wake up my friend” / onboarding will not hatch.
What now? Can I migrate my setup to a new machine (Mac mini) without redoing onboarding? Where do I see what’s new in the latest version? ## Page 81 I can’t access docs.molt.bot (SSL error).
What now? What’s the difference between stable and beta? How do I install the beta version, and what’s the difference between beta and dev? How do I try the latest bits?
How long does install and onboarding usually take? Installer stuck? How do I get more feedback? Windows install says git not found or moltbot not recognized The docs didn’t answer my question - how do I get a better answer?
Page 82 How do I install Moltbot on Linux? How do I install Moltbot on a VPS? Where are the cloud/VPS install guides? Can I ask Clawd to update itself?
What does the onboarding wizard actually do? Do I need a Claude or OpenAI subscription to run this? Can I use Claude Max subscription without an API key How does Anthropic “setup-token” auth work? Where do I find an Anthropic setup-token?
Page 83 Do you support Claude subscription auth (Claude Code OAuth)? Why am I seeing HTTP 429: rate_limit_error from Anthropic? Is AWS Bedrock supported? How does Codex auth work?
Do you support OpenAI subscription auth (Codex OAuth)? How do I set up Gemini CLI OAuth Is a local model OK for casual chats? How do I keep hosted model traffic in a specific region? Do I have to buy a Mac Mini to install this?
Do I need a Mac mini for iMessage support? ## Page 84 If I buy a Mac mini to run Moltbot, can I connect it to my MacBook Pro? Can I use Bun? Telegram: what goes in allowFrom?
Can multiple people use one WhatsApp number with different Moltbots? Can I run a “fast chat” agent and an “Opus for coding” agent? Does Homebrew work on Linux? What’s the difference between the hackable (git) install and npm install?
Can I switch between npm and git installs later? Should I run the Gateway on my laptop or a VPS? ## Page 85 How important is it to run Moltbot on a dedicated machine? What are the minimum VPS requirements and recommended OS?
Can I run Moltbot in a VM and what are the requirements What is Moltbot? What is Moltbot, in one paragraph? What’s the value proposition? I just set it up what should I do first What are the top five everyday use cases for Moltbot ## Page 86 Can Moltbot help with lead gen outreach ads and blogs for a SaaS What are the advantages vs Claude Code for web development?
Skills and automation How do I customize skills without keeping the repo dirty? Can I load skills from a custom folder? How can I use different models for different tasks? The bot freezes while doing heavy work.
How do I offload that? Cron or reminders do not fire. What should I check? How do I install skills on Linux?
Page 87 Can Moltbot run tasks on a schedule or continuously in the background? Can I run Apple/macOS-only skills from Linux? Do you have a Notion or HeyGen integration? How do I install the Chrome extension for browser takeover?
Sandboxing and memory Is there a dedicated sandboxing doc? How do I bind a host folder into the sandbox? How does memory work? Memory keeps forgetting things.
How do I make it stick? Does memory persist forever? What are the limits? ## Page 88 Does semantic memory search require an OpenAI API key?
Where things live on disk Is all data used with Moltbot saved locally? Where does Moltbot store its data? Where should AGENTS.md / SOUL.md / USER.md / MEMORY.md live? What’s the recommended backup strategy?
How do I completely uninstall Moltbot? Can agents work outside the workspace? I’m in remote mode - where is the session store? Config basics What format is the config?
Where is it? ## Page 89 I set gateway.bind: "lan" (or "tailnet") and now nothing listens / the UI says unauthorized Why do I need a token on localhost now? Do I have to restart after changing config? How do I enable web search (and web fetch)?
config.apply wiped my config. How do I recover and avoid this? How do I run a central Gateway with specialized workers across devices? Can the Moltbot browser run headless?
How do I use Brave for browser control? Remote gateways + nodes ## Page 90 How do commands propagate between Telegram, the gateway, and nodes? How can my agent access my computer if the Gateway is hosted remotely? Tailscale is connected but I get no replies.
What now? Can two Moltbots talk to each other (local + VPS)? Do I need separate VPSes for multiple agents Is there a benefit to using a node on my personal laptop instead of SSH from a VPS? Do nodes run a gateway service?
Is there an API / RPC way to apply config? What’s a minimal “sane” config for a first install? ## Page 91 How do I set up Tailscale on a VPS and connect from my Mac? How do I connect a Mac node to a remote Gateway (Tailscale Serve)?
Should I install on a second laptop or just add a node? Env vars and .env loading How does Moltbot load environment variables? “I started the Gateway via the service and my env vars disappeared.” What now? I set COPILOT_GITHUB_TOKEN, but models status shows “Shell env: off.” Why?
Sessions & multiple chats How do I start a fresh conversation? ## Page 92 Do sessions reset automatically if I never send /new? Is there a way to make a team of Moltbots one CEO and many agents Why did context get truncated mid-task? How do I prevent it?
How do I completely reset Moltbot but keep it installed? I’m getting “context too large” errors - how do I reset or compact? Why am I seeing “LLM request rejected: messages.N.content.X.tool_us e.input: Field required”? Why am I getting heartbeat messages every 30 minutes?
Page 93 Do I need to add a “bot account” to a WhatsApp group? How do I get the JID of a WhatsApp group? Why doesn’t Moltbot reply in a group? Do groups/threads share context with DMs?
How many workspaces and agents can I create? Can I run multiple bots or chats at the same time (Slack), and how should I set that up? Models: defaults, selection, aliases, switching What is the “default model”? What model do you recommend?
How do I switch models without wiping my config? ## Page 94 Can I use self-hosted models (llama.cpp, vLLM, Ollama)? What do Clawd, Flawd, and Krill use for models? How do I switch models on the fly (without restarting)?
Can I use GPT 5.2 for daily tasks and Codex 5.2 for coding Why do I see “Model … is not allowed” and then no reply? Why do I see “Unknown model: minimax/MiniMax-M2.1”? Can I use MiniMax as my default and OpenAI for complex tasks? Are opus / sonnet / gpt built‑in shortcuts?
How do I define/override model shortcuts (aliases)? ## Page 95 How do I add models from other providers like OpenRouter or Z.AI? Model failover and “All models failed” How does failover work? What does this error mean?
Fix checklist for No credentials found for profile "anthropic:default" Why did it also try Google Gemini and fail? Auth profiles: what they are and how to manage them What is an auth profile? What are typical profile IDs? Can I control which auth profile is tried first?
OAuth vs API key: what’s the difference? Gateway: ports, “already running”, and remote mode ## Page 96 What port does the Gateway use? Why does moltbot gateway status say Runtime: running but RPC probe: failed? Why does moltbot gateway status show Config (cli) and Config (service) different?
What does “another gateway instance is already listening” mean? How do I run Moltbot in remote mode (client connects to a Gateway elsewhere)? The Control UI says “unauthorized” (or keeps reconnecting). What now?
I set gateway.bind: "tailnet" but it can’t bind / nothing listens Can I run multiple Gateways on the same host? What does “invalid handshake” / code 1008 mean? ## Page 97 Logging and debugging Where are logs? How do I start/stop/restart the Gateway service?
I closed my terminal on Windows - how do I restart Moltbot? The Gateway is up but replies never arrive. What should I check? “Disconnected from gateway: no reason” - what now?
Telegram setMyCommands fails with network errors. What should I check? TUI shows no output. What should I check?
How do I completely stop then start the Gateway? ELI5: moltbot gateway restart vs moltbot gateway ## Page 98 What’s the fastest way to get more details when something fails? Media & attachments My skill generated an image/PDF, but nothing was sent Security and access control Is it safe to expose Moltbot to inbound DMs? Is prompt injection only a concern for public bots?
Should my bot have its own email GitHub account or phone number Can I give it autonomy over my text messages and is that safe Can I use cheaper models for personal assistant tasks? ## Page 99 I ran /start in Telegram but didn’t get a pairing code WhatsApp: will it message my contacts? How does pairing work? Chat commands, aborting tasks, and “it won’t stop” How do I stop internal system messages from showing in chat How do I stop/cancel a running task?
How do I send a Discord message from Telegram? (“Cross-context messaging denied”) Why does it feel like the bot “ignores” rapid‑fire messages?

First 60 seconds if something’s broken

1. Quick status (first check):

moltbot status

Fast local summary: OS + update, gateway/service reachability, agents/sessions, provider config + runtime issues (when gateway is reachable).

2. Pasteable report (safe to share):

moltbot status --all

Read-only diagnosis with log tail (tokens redacted).

3. Daemon + port state:

moltbot gateway status

Shows supervisor runtime vs RPC reachability, the probe target URL, and which config the service likely used.

4. Deep probes:

moltbot status --deep

Runs gateway health checks + provider probes (requires a reachable gateway). See Health.

5. Tail the latest log:

moltbot logs --follow

If RPC is down, fall back to:

tail -f "$(ls -t /tmp/moltbot/moltbot-*.log | head -1)"

File logs are separate from service logs; see Logging and Troubleshooting.

6. Run the doctor (repairs):

moltbot doctor

Repairs/migrates config/state + runs health checks. See Doctor.

7. Gateway snapshot:

moltbot health --json
moltbot health --verbose # shows the target URL + config path on errors

Asks the running gateway for a full snapshot (WS-only). See Health.
Quick start and first-run setup ## Page 104 Im stuck whats the fastest way to get unstuck Use a local AI agent that can see your machine.
That is far more effective than asking in Discord, because most “I’m stuck” cases are local config or environment issues that remote helpers cannot inspect. Claude Code: https://www.anthropic.com/claude-code/ OpenAI Codex: https://openai.com/codex/ These tools can read the repo, run commands, inspect logs, and help fix your machine-level setup (PATH, services, permissions, auth files). Give them the full source checkout via the hackable (git) install:

curl -fsSL https://molt.bot/install.sh | bash -s -- --install-method git

This installs Moltbot from a git checkout, so the agent can read the code + docs and reason about the exact version you are running. You can always switch back to stable later by re-running the installer without --install-method git. Tip: ask the agent to plan and supervise the fix (step-by-step), then execute only the necessary commands.
That keeps changes small and easier to audit. If you discover a real bug or fix, please file a GitHub issue or send a PR: https://github.com/moltbot/moltbot/issues https://github.com/moltbot/moltbot/pulls

Start with these commands (share outputs when asking for help):

moltbot status
moltbot models status
moltbot doctor

What they do: moltbot status: quick snapshot of gateway/agent health + basic config. moltbot models status: checks provider auth + model availability. moltbot doctor: validates and repairs common config/state issues.
Other useful CLI checks: moltbot status --all, moltbot logs --follow, moltbot gateway status, moltbot health --verbose. Quick debug loop: First 60 seconds if something’s broken. Install docs: Install, Installer flags, Updating.

Whats the recommended way to install and set up Moltbot

The repo recommends running from source and using the onboarding wizard:

curl -fsSL https://molt.bot/install.sh | bash
moltbot onboard --install-daemon

The wizard can also build UI assets automatically.
After onboarding, you typically run the Gateway on port 18789. From source (contributors/dev):

git clone https://github.com/moltbot/moltbot.git
cd moltbot
pnpm install
pnpm build
pnpm ui:build # auto-installs UI deps on first run
moltbot onboard

If you don’t have a global install yet, run it via pnpm moltbot onboard.

How do I open the dashboard after onboarding

The wizard now opens your browser with a tokenized dashboard URL right after onboarding and also prints the full link (with token) in the summary. Keep that tab open; if it didn’t launch, copy/paste the printed URL on the same machine.
Tokens stay local to your host; nothing is fetched from the browser.

How do I authenticate the dashboard token on localhost vs remote

Localhost (same machine): Open http://127.0.0.1:18789/. If it asks for auth, run moltbot dashboard and use the tokenized link (?token=...). The token is the same value as gateway.auth.token (or CLAWDBOT_GATEWAY_TOKEN) and is stored by the UI after first load.
Not on localhost: Tailscale Serve (recommended): keep bind loopback, run moltbot gateway --tailscale serve, open https:///. If gateway.auth.allowTailscale is true, identity headers satisfy auth (no token). Tailnet bind: run moltbot gateway --bind tailnet --token "", open http://:18789/, paste token in dashboard settings. ## Page 113 SSH tunnel: ssh -N -L 18789:127.0.0.1:18789 user@host then open http://127.0.0.1:18789/?token=...
from moltbot dashboard. See Dashboard and Web surfaces for bind modes and auth details. What runtime do I need Node >= 22 is required. pnpm is recommended.
Bun is not recommended for the Gateway. Does it run on Raspberry Pi ## Page 114 Yes. The Gateway is lightweight - docs list 512MB-1GB RAM, 1 core, and about 500MB disk as enough for personal use, and note that a Raspberry Pi 4 can run it. If you want extra headroom (logs, media, other services), 2GB is recommended, but it’s not a hard minimum.
Tip: a small Pi/VPS can host the Gateway, and you can pair nodes on your laptop/phone for local screen/camera/canvas or command execution. See Nodes. ## Page 115 Any tips for Raspberry Pi installs Short version: it works, but expect rough edges. Use a 64-bit OS and keep Node >= 22.
Prefer the hackable (git) install so you can see logs and update fast. Start without channels/skills, then add them one by one. If you hit weird binary issues, it is usually an ARM compatibility problem. Docs: Linux, Install.
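The “keep Node >= 22” requirement is easy to gate in a setup script. A minimal sketch (the version string is hard-coded for illustration; in practice you would capture it with ver=$(node --version)):

```shell
# Check a Node version string against the >= 22 requirement.
ver="v22.11.0"       # stand-in for: ver=$(node --version)
major=${ver#v}       # strip the leading "v"
major=${major%%.*}   # keep only the major component
if [ "$major" -ge 22 ]; then
  echo "Node $ver is new enough"
else
  echo "Node $ver is too old (need >= 22)"
fi
```

The same check is useful on a Raspberry Pi, where distro-packaged Node is often older than 22.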
It is stuck on wake up my friend onboarding will not hatch What now

That screen depends on the Gateway being reachable and authenticated. The TUI also sends “Wake up, my friend!” automatically on first hatch. If you see that line with no reply and tokens stay at 0, the agent never ran.

1. Restart the Gateway:

moltbot gateway restart

2. Check status + auth:

moltbot status
moltbot models status
moltbot logs --follow

3. If it still hangs, run:

moltbot doctor

If the Gateway is remote, ensure the tunnel/Tailscale connection is up and that the UI is pointed at the right Gateway.
See Remote access. Can I migrate my setup to a new machine Mac mini without redoing onboarding Yes. Copy the state directory and workspace, then run Doctor once.
This keeps your bot “exactly the same” (memory, session history, auth, and channel state) as long as you copy both locations:

1. Install Moltbot on the new machine.
2. Copy $CLAWDBOT_STATE_DIR (default: ~/.clawdbot) from the old machine.
3. Copy your workspace (default: ~/clawd).
4. Run moltbot doctor and restart the Gateway service.
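The copy steps above can be sketched end-to-end. This demo uses throwaway local directories as stand-ins for the old and new machines (paths and file contents are illustrative; across real machines you would copy with scp or rsync instead of cp):

```shell
# Stand-ins for the old and new machine home directories.
OLD=$(mktemp -d); NEW=$(mktemp -d)

# Pretend state dir + workspace on the old machine.
mkdir -p "$OLD/.clawdbot/credentials" "$OLD/clawd"
echo '{}' > "$OLD/.clawdbot/credentials/oauth.json"
echo '# identity' > "$OLD/clawd/IDENTITY.md"

# Steps 2-3: copy BOTH locations wholesale.
cp -a "$OLD/.clawdbot" "$NEW/"
cp -a "$OLD/clawd" "$NEW/"

ls "$NEW/.clawdbot/credentials" "$NEW/clawd"
```

After the real copy, run moltbot doctor on the new machine and restart the service.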
That preserves config, auth profiles, WhatsApp creds, sessions, and memory. If you’re in remote mode, remember the gateway host owns the session store and workspace. Important: if you only commit/push your workspace to GitHub, you’re backing up memory + bootstrap files, but not session history or auth. Those live under ~/.clawdbot/ (for example ~/.clawdbot/agents//sessions/). Related: Migrating, Where things live on disk, Agent workspace, Doctor, Remote mode.

Where do I see whats new in the latest version

Check the GitHub changelog: https://github.com/moltbot/moltbot/blob/main/CHANGELOG.md

Newest entries are at the top. If the top section is marked Unreleased, the next dated section is the latest shipped version. Entries are grouped by Highlights, Changes, and Fixes (plus docs/other sections when needed).
I cant access docs.molt.bot SSL error What now

Some Comcast/Xfinity connections incorrectly block docs.molt.bot via Xfinity Advanced Security. Disable it or allowlist docs.molt.bot, then retry. More detail: Troubleshooting. Please help us unblock it by reporting here: https://spa.xfinity.com/check_url_status.
If you still can’t reach the site, the docs are mirrored on GitHub: https://github.com/moltbot/moltbot/tree/main/docs

Whats the difference between stable and beta

Stable and beta are npm dist‑tags, not separate code lines: latest = stable, beta = early build for testing. We ship builds to beta, test them, and once a build is solid we promote that same version to latest.
That’s why beta and latest can point at the same version. See what changed: https://github.com/moltbot/moltbot/blob/main/CHANGELOG.md

How do I install the beta version and whats the difference between beta and dev

Beta is the npm dist‑tag beta (may match latest). Dev is the moving head of main (git); when published, it uses the npm dist‑tag dev.
One‑liners (macOS/Linux):

curl -fsSL --proto '=https' --tlsv1.2 https://molt.bot/install.sh | bash -s -- --beta

curl -fsSL --proto '=https' --tlsv1.2 https://molt.bot/install.sh | bash -s -- --install-method git

Windows installer (PowerShell): https://molt.bot/install.ps1 More detail: Development channels and Installer flags.

How long does install and onboarding usually take

Rough guide: Install: 2-5 minutes. Onboarding: 5-15 minutes depending on how many channels/models you configure. If it hangs, use Installer stuck and the fast debug loop in Im stuck.

How do I try the latest bits

Two options:

1. Dev channel (git checkout):

moltbot update --channel dev

This switches to the main branch and updates from source.

2. Hackable install (from the installer site):

curl -fsSL https://molt.bot/install.sh | bash -s -- --install-method git

That gives you a local repo you can edit, then update via git.
If you prefer a clean clone manually, use:

git clone https://github.com/moltbot/moltbot.git
cd moltbot
pnpm install
pnpm build

Docs: Update, Development channels, Install.

Installer stuck How do I get more feedback

Re-run the installer with verbose output:

curl -fsSL https://molt.bot/install.sh | bash -s -- --verbose

Beta install with verbose:

curl -fsSL https://molt.bot/install.sh | bash -s -- --beta --verbose

For a hackable (git) install:

curl -fsSL https://molt.bot/install.sh | bash -s -- --install-method git --verbose

More options: Installer flags.

Windows install says git not found or moltbot not recognized

Two common Windows issues:

1) npm error spawn git / git not found

Install Git for Windows and make sure git is on your PATH. Close and reopen PowerShell, then re-run the installer.
2) moltbot is not recognized after install

Your npm global bin folder is not on PATH. Check the path:

npm config get prefix

Ensure that prefix’s bin directory is on PATH (on most systems it is %AppData%\npm). Close and reopen PowerShell after updating PATH. If you want the smoothest Windows setup, use WSL2 instead of native Windows.
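The PATH check can be scripted in a POSIX shell (sketch; the prefix is hard-coded to a sample value because npm may not be present where this runs, so substitute prefix=$(npm config get prefix) in practice):

```shell
# Ensure the npm global bin dir is on PATH (POSIX sh; on native
# Windows the global dir is %AppData%\npm itself).
prefix="$HOME/.npm-global"   # stand-in for: $(npm config get prefix)
case ":$PATH:" in
  *":$prefix/bin:"*) echo "already on PATH" ;;
  *) PATH="$prefix/bin:$PATH"; export PATH; echo "added $prefix/bin" ;;
esac
```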
Docs: Windows.

The docs didnt answer my question how do I get a better answer

Use the hackable (git) install so you have the full source and docs locally, then ask your bot (or Claude/Codex) from that folder so it can read the repo and answer precisely.

curl -fsSL https://molt.bot/install.sh | bash -s -- --install-method git

More detail: Install and Installer flags.

How do I install Moltbot on Linux

Short answer: follow the Linux guide, then run the onboarding wizard.
Linux quick path + service install: Linux. Full walkthrough: Getting Started. Installer + updates: Install & updates. How do I install Moltbot on a VPS Any Linux VPS works.
Install on the server, then use SSH/Tailscale to reach the Gateway. ## Page 134 Guides: exe.dev, Hetzner, Fly.io. Remote access: Gateway remote. Where are the cloudVPS install guides We keep a hosting hub with the common providers.
Pick one and follow the guide: VPS hosting (all providers in one place) Fly.io Hetzner exe.dev ## Page 135 How it works in the cloud: the Gateway runs on the server, and you access it from your laptop/phone via the Control UI (or Tailscale/SSH). Your state + workspace live on the server, so treat the host as the source of truth and back it up. You can pair nodes (Mac/iOS/Android/headless) to that cloud Gateway to access local screen/camera/canvas or run commands on your laptop while keeping the Gateway in the cloud. ## Page 136 Hub: Platforms.
Remote access: Gateway remote. Nodes: Nodes, Nodes CLI.

Can I ask Clawd to update itself?

Short answer: possible, but not recommended. The update flow can restart the Gateway (which drops the active session), may need a clean git checkout, and can prompt for confirmation.
Safer: run updates from a shell as the operator.

Use the CLI:

```shell
moltbot update
moltbot update status
moltbot update --channel stable|beta|dev
moltbot update --tag <dist-tag|version>
moltbot update --no-restart
```

If you must automate from an agent:

```shell
moltbot update --yes --no-restart
moltbot gateway restart
```

Docs: Update, Updating.

What does the onboarding wizard actually do?

moltbot onboard is the recommended setup path. In local mode it walks you through:

- Model/auth setup (Anthropic setup-token recommended for Claude subscriptions, OpenAI Codex OAuth supported, API keys optional, LM Studio local models supported)
- Workspace location + bootstrap files
- Gateway settings (bind/port/auth/tailscale)
- Providers (WhatsApp, Telegram, Discord, Mattermost (plugin), Signal, iMessage)
- Daemon install (LaunchAgent on macOS; systemd user unit on Linux/WSL2)
- Health checks and skills selection

It also warns if your configured model is unknown or missing auth.
Do I need a Claude or OpenAI subscription to run this?

No. You can run Moltbot with API keys (Anthropic/OpenAI/others) or with local‑only models so your data stays on your device. Subscriptions (Claude Pro/Max or OpenAI Codex) are optional ways to authenticate those providers. Docs: Anthropic, OpenAI, Local models, Models.
Can I use a Claude Max subscription without an API key?

Yes. You can authenticate with a setup-token instead of an API key.
This is the subscription path. Claude Pro/Max subscriptions do not include an API key, so this is the correct approach for subscription accounts. Important: you must verify with Anthropic that this usage is allowed under their subscription policy and terms. If you want the most explicit, supported path, use an Anthropic API key.
How does Anthropic setup-token auth work?

claude setup-token generates a token string via the Claude Code CLI (it is not available in the web console). You can run it on any machine. Choose "Anthropic token (paste setup-token)" in the wizard or paste it with moltbot models auth paste-token --provider anthropic. The token is stored as an auth profile for the anthropic provider and used like an API key (no auto-refresh).
More detail: OAuth.

Where do I find an Anthropic setup-token?

It is not in the Anthropic Console. The setup-token is generated by the Claude Code CLI on any machine:

```shell
claude setup-token
```

Copy the token it prints, then choose "Anthropic token (paste setup-token)" in the wizard. If you want to run it on the gateway host, use moltbot models auth setup-token --provider anthropic.
If you ran claude setup-token elsewhere, paste it on the gateway host with moltbot models auth paste-token --provider anthropic. See Anthropic.

Do you support Claude subscription auth (Claude Pro/Max)?

Yes, via setup-token. Moltbot no longer reuses Claude Code CLI OAuth tokens; use a setup-token or an Anthropic API key.
Generate the token anywhere and paste it on the gateway host. See Anthropic and OAuth.
Note: Claude subscription access is governed by Anthropic’s terms. For production or multi‑user workloads, API keys are usually the safer choice.

Why am I seeing HTTP 429 (rate_limit_error) from Anthropic?

That means your Anthropic quota/rate limit is exhausted for the current window. If you use a Claude subscription (setup‑token or Claude Code OAuth), wait for the window to reset or upgrade your plan.
If you use an Anthropic API key, check the Anthropic Console for usage/billing and raise limits as needed. Tip: set a fallback model so Moltbot can keep replying while a provider is rate‑limited. See Models and OAuth.

Is AWS Bedrock supported?

Yes, via pi‑ai’s Amazon Bedrock (Converse) provider with manual config.
You must supply AWS credentials/region on the gateway host and add a Bedrock provider entry in your models config. See Amazon Bedrock and Model providers. If you prefer a managed key flow, an OpenAI‑compatible proxy in front of Bedrock is still a valid option.

How does Codex auth work?

Moltbot supports OpenAI Code (Codex) via OAuth (ChatGPT sign-in).
The wizard can run the OAuth flow and will set the default model to openai-codex/gpt-5.2 when appropriate. See Model providers and Wizard.

Do you support OpenAI subscription auth (Codex OAuth)?

Yes. Moltbot fully supports OpenAI Code (Codex) subscription OAuth.
The onboarding wizard can run the OAuth flow for you. See OAuth, Model providers, and Wizard.

How do I set up Gemini CLI OAuth?

Gemini CLI uses a plugin auth flow, not a client id or secret in moltbot.json. Steps:

1. Enable the plugin: moltbot plugins enable google-gemini-cli-auth
2. Login: moltbot models auth login --provider google-gemini-cli --set-default

This stores OAuth tokens in auth profiles on the gateway host.
Details: Model providers.

Is a local model OK for casual chats?

Usually no. Moltbot needs large context + strong safety; small cards truncate and leak. If you must, run the largest MiniMax M2.1 build you can locally (LM Studio) and see /gateway/local-models.
Smaller/quantized models increase prompt-injection risk; see Security.

How do I keep hosted model traffic in a specific region?

Pick region-pinned endpoints. OpenRouter exposes US-hosted options for MiniMax, Kimi, and GLM; choose the US-hosted variant to keep data in-region. You can still list Anthropic/OpenAI alongside these by using models.mode: "merge" so fallbacks stay available while respecting the region-pinned provider you select.
Do I have to buy a Mac mini to install this?

No. Moltbot runs on macOS or Linux (Windows via WSL2). A Mac mini is optional; some people buy one as an always‑on host, but a small VPS, home server, or Raspberry Pi‑class box works too. You only need a Mac for macOS‑only tools.
For iMessage, you can keep the Gateway on Linux and run imsg on any Mac over SSH by pointing channels.imessage.cliPath at an SSH wrapper. If you want other macOS‑only tools, run the Gateway on a Mac or pair a macOS node.

Docs: iMessage, Nodes, Mac remote mode.

Do I need a Mac mini for iMessage support?

You need some macOS device signed into Messages.
It does not have to be a Mac mini; any Mac works. Moltbot’s iMessage integrations run on macOS (BlueBubbles or imsg), while the Gateway can run elsewhere. Common setups:

- Run the Gateway on Linux/VPS, and point channels.imessage.cliPath at an SSH wrapper that runs imsg on the Mac.
- Run everything on the Mac if you want the simplest single‑machine setup.
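The first setup can be sketched in config like this (the wrapper path ~/bin/imsg-ssh is a hypothetical name; channels.imessage.cliPath is the key named above, and Moltbot only needs the wrapper to speak stdio):

```json5
{
  channels: {
    imessage: {
      // A small executable that runs, e.g.: exec ssh -T user@mac-host imsg "$@"
      cliPath: "~/bin/imsg-ssh"
    }
  }
}
```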
Docs: iMessage, BlueBubbles, Mac remote mode.

If I buy a Mac mini to run Moltbot, can I connect it to my MacBook Pro?

Yes. The Mac mini can run the Gateway, and your MacBook Pro can connect as a node (companion device). Nodes don’t run the Gateway; they provide extra capabilities like screen/camera/canvas and system.run on that device.
Common pattern: Gateway on the Mac mini (always‑on). MacBook Pro runs the macOS app or a node host and pairs to the Gateway. Use moltbot nodes status / moltbot nodes list to see it. Docs: Nodes, Nodes CLI.
Can I use Bun?

Bun is not recommended. We see runtime bugs, especially with WhatsApp and Telegram. Use Node for stable gateways. If you still want to experiment with Bun, do it on a non‑production gateway without WhatsApp/Telegram.
Telegram: what goes in allowFrom?

channels.telegram.allowFrom is the human sender’s Telegram user ID (numeric, recommended) or @username. It is not the bot username. Safer (no third-party bot): DM your bot, then run moltbot logs --follow and read from.id. Official Bot API: DM your bot, then call https://api.telegram.org/bot<bot_token>/getUpdates and read message.from.id.
Third-party (less private): DM @userinfobot or @getidsbot. See /channels/telegram.

Can multiple people use one WhatsApp number with different Moltbots?

Yes, via multi‑agent routing. Bind each sender’s WhatsApp DM (peer kind: "dm", sender E.164 like +15551234567) to a different agentId, so each person gets their own workspace and session store.
Replies still come from the same WhatsApp account, and DM access control (channels.whatsapp.dmPolicy / channels.whatsapp.allowFrom) is global per WhatsApp account. See Multi-Agent Routing and WhatsApp.

Can I run a fast chat agent and an Opus-for-coding agent?

Yes. Use multi‑agent routing: give each agent its own default model, then bind inbound routes (provider account or specific peers) to each agent.
Example config lives in Multi-Agent Routing. See also Models and Configuration.

Does Homebrew work on Linux?

Yes. Homebrew supports Linux (Linuxbrew).
Quick setup:

```shell
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"' >> ~/.profile
eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"
brew install
```

If you run Moltbot via systemd, ensure the service PATH includes /home/linuxbrew/.linuxbrew/bin (or your brew prefix) so brew-installed tools resolve in non‑login shells. Recent builds also prepend common user bin dirs on Linux systemd services (for example ~/.local/bin, ~/.local/share/pnpm, ~/.bun/bin, and ~/.npm-global/bin) and honor PNPM_HOME, NPM_CONFIG_PREFIX, BUN_INSTALL, VOLTA_HOME, ASDF_DATA_DIR, NVM_DIR, and FNM_DIR when set.

What's the difference between the hackable (git) install and the npm install?

Hackable (git) install: full source checkout, editable, best for contributors. You run builds locally and can patch code/docs.
npm install: global CLI install, no repo, best for “just run it.” Updates come from npm dist‑tags. Docs: Getting started, Updating.

Can I switch between npm and git installs later?

Yes. Install the other flavor, then run Doctor so the gateway service points at the new entrypoint.
This does not delete your data; it only changes the Moltbot code install. Your state (~/.clawdbot) and workspace (~/clawd) stay untouched.

From npm → git:

```shell
git clone https://github.com/moltbot/moltbot.git
cd moltbot
pnpm install
pnpm build
moltbot doctor
moltbot gateway restart
```

From git → npm:

```shell
npm install -g moltbot@latest
moltbot doctor
moltbot gateway restart
```

Doctor detects a gateway service entrypoint mismatch and offers to rewrite the service config to match the current install (use --repair in automation). Backup tips: see Backup strategy.
Should I run the Gateway on my laptop or a VPS?

Short answer: if you want 24/7 reliability, use a VPS. If you want the lowest friction and you’re okay with sleep/restarts, run it locally.

Laptop (local Gateway)
Pros: no server cost, direct access to local files, live browser window.
Cons: sleep/network drops = disconnects, OS updates/reboots interrupt, must stay awake.
VPS / cloud
Pros: always‑on, stable network, no laptop sleep issues, easier to keep running.
Cons: often run headless (use screenshots), remote file access only, you must SSH for updates.

Moltbot-specific note: WhatsApp/Telegram/Slack/Mattermost (plugin)/Discord all work fine from a VPS. The only real trade-off is headless browser vs a visible window.
See Browser. Recommended default: VPS if you had gateway disconnects before. Local is great when you’re actively using the Mac and want local file access or UI automation with a visible browser.

How important is it to run Moltbot on a dedicated machine?

Not required, but recommended for reliability and isolation.
Dedicated host (VPS/Mac mini/Pi): always‑on, fewer sleep/reboot interruptions, cleaner permissions, easier to keep running. Shared laptop/desktop: totally fine for testing and active use, but expect pauses when the machine sleeps or updates. If you want the best of both worlds, keep the Gateway on a dedicated host and pair your laptop as a node for local screen/camera/exec tools. See Nodes.
For security guidance, read Security.

What are the minimum VPS requirements and recommended OS?

Moltbot is lightweight. For a basic Gateway + one chat channel:

- Absolute minimum: 1 vCPU, 1GB RAM, ~500MB disk.
- Recommended: 1-2 vCPU, 2GB RAM or more for headroom (logs, media, multiple channels).
Node tools and browser automation can be resource-hungry. OS: use Ubuntu LTS (or any modern Debian/Ubuntu). The Linux install path is best tested there. Docs: Linux, VPS hosting.
Can I run Moltbot in a VM, and what are the requirements?

Yes. Treat a VM the same as a VPS: it needs to be always on, reachable, and have enough RAM for the Gateway and any channels you enable. Baseline guidance:

- Absolute minimum: 1 vCPU, 1GB RAM.
- Recommended: 2GB RAM or more if you run multiple channels, browser automation, or media tools.
OS: Ubuntu LTS or another modern Debian/Ubuntu. If you are on Windows, WSL2 is the easiest VM-style setup and has the best tooling compatibility. See Windows, VPS hosting. If you are running macOS in a VM, see macOS VM.
What is Moltbot?

What is Moltbot in one paragraph?

Moltbot is a personal AI assistant you run on your own devices. It replies on the messaging surfaces you already use (WhatsApp, Telegram, Slack, Mattermost (plugin), Discord, Google Chat, Signal, iMessage, WebChat) and can also do voice + a live Canvas on supported platforms. The Gateway is the always-on control plane; the assistant is the product.
What's the value proposition?

Moltbot is not “just a Claude wrapper.” It’s a local-first control plane that lets you run a capable assistant on your own hardware, reachable from the chat apps you already use, with stateful sessions, memory, and tools, without handing control of your workflows to a hosted SaaS. Highlights:

- Your devices, your data: run the Gateway wherever you want (Mac, Linux, VPS) and keep the workspace + session history local.
- Real channels, not a web sandbox: WhatsApp/Telegram/Slack/Discord/Signal/iMessage/etc., plus mobile voice and Canvas on supported platforms.
- Model-agnostic: use Anthropic, OpenAI, MiniMax, OpenRouter, etc., with per‑agent routing and failover.
- Local-only option: run local models so all data can stay on your device if you want.
- Multi-agent routing: separate agents per channel, account, or task, each with its own workspace and defaults.
- Open source and hackable: inspect, extend, and self-host without vendor lock‑in.

Docs: Gateway, Channels, Multi‑agent, Memory.
I just set it up; what should I do first?

Good first projects:

- Build a website (WordPress, Shopify, or a simple static site).
- Prototype a mobile app (outline, screens, API plan).
- Organize files and folders (cleanup, naming, tagging).
- Connect Gmail and automate summaries or follow-ups.
It can handle large tasks, but it works best when you split them into phases and use sub-agents for parallel work.

What are the top five everyday use cases for Moltbot?

Everyday wins usually look like:

- Personal briefings: summaries of inbox, calendar, and news you care about.
- Research and drafting: quick research, summaries, and first drafts for emails or docs.
- Reminders and follow-ups: cron- or heartbeat-driven nudges and checklists.
- Browser automation: filling forms, collecting data, and repeating web tasks.
- Cross-device coordination: send a task from your phone, let the Gateway run it on a server, and get the result back in chat.

Can Moltbot help with lead gen, outreach, ads, and blogs for a SaaS?

Yes, for research, qualification, and drafting. It can scan sites, build shortlists, summarize prospects, and write outreach or ad copy drafts.
For outreach or ad runs, keep a human in the loop. Avoid spam, follow local laws and platform policies, and review anything before it is sent. The safest pattern is to let Moltbot draft and you approve. Docs: Security.
What are the advantages vs Claude Code for web development?

Moltbot is a personal assistant and coordination layer, not an IDE replacement. Use Claude Code or Codex for the fastest direct coding loop inside a repo. Use Moltbot when you want durable memory, cross-device access, and tool orchestration. Advantages:

- Persistent memory + workspace across sessions
- Multi-platform access (WhatsApp, Telegram, TUI, WebChat)
- Tool orchestration (browser, files, scheduling, hooks)
- Always-on Gateway (run on a VPS, interact from anywhere)
- Nodes for local browser/screen/camera/exec

Showcase: https://molt.bot/showcase

Skills and automation

How do I customize skills without keeping the repo dirty?
Put your changes in ~/.clawdbot/skills//SKILL.md (or add a folder via skills.load.extraDirs in ). Precedence is ~/.clawdbot/moltbot.json /skills > ~/.clawdbot/skills > bundled, so managed overrides win without touching git. Only upstream-worthy edits should live in the repo and go out as PRs. Can I load skills from a custom folder ## Page 181 Yes.
Add extra directories via skills.load.extraDirs in ~/.clawdbot/moltbot.json (lowest precedence). Default precedence remains: /skills → ~/.clawdbot/skills → bundled → skills.load.extraDirs. clawdhub installs into ./skills by default, which Moltbot treats as /skills.

How can I use different models for different tasks?

Today the supported patterns are:

- Cron jobs: isolated jobs can set a model override per job.
Page 182 Sub-agents: route tasks to separate agents with different default models. On-demand switch: use /model to switch the current session model at any time. See Cron jobs, Multi-Agent Routing, and Slash commands. The bot freezes while doing heavy work How do I offload that Use sub-agents for long or parallel tasks.
Sub-agents run in their own session, return a summary, and keep your main chat responsive. Ask your bot to “spawn a sub-agent for this task” or use /subagents. Use /status in chat to see what the Gateway is doing right now (and whether it is busy). Token tip: long tasks and sub-agents both consume tokens.
If cost is a concern, set a cheaper model for sub-agents via agents.defaults.subagents.model. Docs: Sub-agents.

Cron or reminders do not fire; what should I check?

Cron runs inside the Gateway process. If the Gateway is not running continuously, scheduled jobs will not run.
Checklist:

- Confirm cron is enabled (cron.enabled) and CLAWDBOT_SKIP_CRON is not set.
- Check the Gateway is running 24/7 (no sleep/restarts).
- Verify timezone settings for the job (--tz vs host timezone).

Debug:

```shell
moltbot cron run --force
moltbot cron runs --id --limit 50
```

Docs: Cron jobs, Cron vs Heartbeat.
How do I install skills on Linux?

Use ClawdHub (CLI) or drop skills into your workspace. The macOS Skills UI isn’t available on Linux. Browse skills at https://clawdhub.com.

Install the ClawdHub CLI (pick one package manager):

```shell
npm i -g clawdhub
```

```shell
pnpm add -g clawdhub
```

Can Moltbot run tasks on a schedule or continuously in the background?

Yes.
Use the Gateway scheduler:

- Cron jobs for scheduled or recurring tasks (persist across restarts).
- Heartbeat for “main session” periodic checks.
- Isolated jobs for autonomous agents that post summaries or deliver to chats.

Docs: Cron jobs, Cron vs Heartbeat, Heartbeat.
Can I run Apple macOS-only skills from Linux?

Not directly. macOS skills are gated by metadata.clawdbot.os plus required binaries, and skills only appear in the system prompt when they are eligible on the Gateway host. On Linux, darwin-only skills (like imsg, apple-notes, apple-reminders) will not load unless you override the gating. You have three supported patterns:

Option A: run the Gateway on a Mac (simplest).
Run the Gateway where the macOS binaries exist, then connect from Linux in remote mode or over Tailscale. The skills load normally because the Gateway host is macOS.

Option B: use a macOS node (no SSH). Run the Gateway on Linux, pair a macOS node (menubar app), and set Node Run Commands to “Always Ask” or “Always Allow” on the Mac.
Moltbot can treat macOS-only skills as eligible when the required binaries exist on the node. The agent runs those skills via the nodes tool. If you choose “Always Ask”, approving “Always Allow” in the prompt adds that command to the allowlist.

Option C: proxy macOS binaries over SSH (advanced).
Keep the Gateway on Linux, but make the required CLI binaries resolve to SSH wrappers that run on a Mac.
Then override the skill to allow Linux so it stays eligible.

1. Create an SSH wrapper for the binary (example: imsg):

```shell
#!/usr/bin/env bash
set -euo pipefail
exec ssh -T user@mac-host /opt/homebrew/bin/imsg "$@"
```

2. Put the wrapper on PATH on the Linux host (for example ~/bin/imsg).
3. Override the skill metadata (workspace or ~/.clawdbot/skills) to allow Linux:

```
---
name: imsg
description: iMessage/SMS CLI for listing chats, history, watch, and sending.
metadata: {"moltbot":{"os":["darwin","linux"],"requires":{"bins":["imsg"]}}}
---
```

4. Start a new session so the skills snapshot refreshes.
For iMessage specifically, you can also point channels.imessage.cliPath at an SSH wrapper (Moltbot only needs stdio). See iMessage.

Do you have a Notion or HeyGen integration?

Not built‑in today. Options:

- Custom skill / plugin: best for reliable API access (Notion/HeyGen both have APIs).
- Browser automation: works without code but is slower and more fragile.

If you want to keep context per client (agency workflows), a simple pattern is: one Notion page per client (context + preferences + active work), and ask the agent to fetch that page at the start of a session. If you want a native integration, open a feature request or build a skill targeting those APIs.
Install skills:

```shell
clawdhub install
clawdhub update --all
```

ClawdHub installs into ./skills under your current directory (or falls back to your configured Moltbot workspace); Moltbot treats that as /skills on the next session. For shared skills across agents, place them in ~/.clawdbot/skills//SKILL.md. Some skills expect binaries installed via Homebrew; on Linux that means Linuxbrew (see the Homebrew-on-Linux FAQ entry above). See Skills and ClawdHub.
How do I install the Chrome extension for browser takeover?

Use the built-in installer, then load the unpacked extension in Chrome:

```shell
moltbot browser extension install
moltbot browser extension path
```

Then Chrome → chrome://extensions → enable “Developer mode” → “Load unpacked” → pick that folder. Full guide (including remote Gateway + security notes): Chrome extension.

If the Gateway runs on the same machine as Chrome (default setup), you usually do not need anything extra. If the Gateway runs elsewhere, run a node host on the browser machine so the Gateway can proxy browser actions. You still need to click the extension button on the tab you want to control (it doesn’t auto-attach).
Sandboxing and memory

Is there a dedicated sandboxing doc?

Yes. See Sandboxing. For Docker-specific setup (full gateway in Docker or sandbox images), see Docker.

Can I keep DMs personal but make groups public/sandboxed with one agent?

Yes, if your private traffic is DMs and your public traffic is groups.
Use agents.defaults.sandbox.mode: "non-main" so group/channel sessions (non-main keys) run in Docker, while the main DM session stays on-host.
Then restrict what tools are available in sandboxed sessions via tools.sandbox.tools.

Setup walkthrough + example config: Groups: personal DMs + public groups. Key config reference: Gateway configuration.

How do I bind a host folder into the sandbox?

Set agents.defaults.sandbox.docker.binds to ["host:path:mode"] (e.g., "/home/user/src:/src:ro"). Global + per-agent binds merge; per-agent binds are ignored when scope: "shared". Use :ro for anything sensitive, and remember binds bypass the sandbox
See Sandboxing and Sandbox vs Tool Policy vs Elevated for examples and safety notes. How does memory work Moltbot memory is just Markdown files in the agent workspace: Daily notes in memory/YYYY-MM-DD.md Curated long-term notes in MEMORY.md (main/private sessions only) Moltbot also runs a silent pre-compaction memory flush to ## Page 201 remind the model to write durable notes before auto-compaction.
This only runs when the workspace is writable (read-only sandboxes skip it). See Memory. Memory keeps forgetting things How do I make it stick Ask the bot to write the fact to memory. Long-term notes belong in , short-term context goes into MEMORY.md .
memory/YYYY-MM-DD.md This is still an area we are improving. It helps to remind the ## Page 202 model to store memories; it will know what to do. If it keeps forgetting, verify the Gateway is using the same workspace on every run. Docs: Memory, Agent workspace.
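As a sketch of the workspace layout these memory answers describe (file names come from these docs; the date is illustrative):

```
~/clawd/                  # default agent workspace
├── AGENTS.md
├── MEMORY.md             # curated long-term notes
└── memory/
    └── 2026-01-30.md     # daily notes: memory/YYYY-MM-DD.md
```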
Does semantic memory search require an OpenAI API key?

Only if you use OpenAI embeddings. Codex OAuth covers chat/completions and does not grant embeddings access, so signing in with Codex (OAuth or the Codex CLI login) does not help for semantic memory search. OpenAI embeddings still need a real API key (OPENAI_API_KEY or models.providers.openai.apiKey).

If you don’t set a provider explicitly, Moltbot auto-selects a provider when it can resolve an API key (auth profiles, models.providers.*.apiKey, or env vars). It prefers OpenAI if an OpenAI key resolves, otherwise Gemini if a Gemini key resolves. If neither key is available, memory search stays disabled until you configure it. If you have a local model path configured and present, Moltbot prefers local.

If you’d rather stay local, set memorySearch.provider = "local" (and optionally memorySearch.fallback = "none"). If you want Gemini embeddings, set memorySearch.provider = "gemini" and provide GEMINI_API_KEY (or memorySearch.remote.apiKey). We support OpenAI, Gemini, or local embedding models; see Memory for the setup details.

Does memory persist forever? What are the limits?

Memory files live on disk and persist until you delete them. The limit is your storage, not the model.
The session context is still limited by the model context window, so long conversations can compact or truncate.
That is why memory search exists; it pulls only the relevant parts back into context. Docs: Memory, Context.

Where things live on disk

Is all data used with Moltbot saved locally?

No. Moltbot’s state is local, but external services still see what you send them. Local by default: sessions, memory files, config, and workspace live on the Gateway host (~/.clawdbot + your workspace directory).
Remote by necessity: messages you send to model providers (Anthropic/OpenAI/etc.) go to their APIs, and chat platforms (WhatsApp/Telegram/Slack/etc.) store message data on their servers. You control the footprint: using local models keeps prompts on your machine, but channel traffic still goes through the channel’s servers. Related: Agent workspace, Memory.

Where does Moltbot store its data?

Everything lives under $CLAWDBOT_STATE_DIR (default: ~/.clawdbot):

| Path | Purpose |
| --- | --- |
| $CLAWDBOT_STATE_DIR/moltbot.json | Main config (JSON5) |
| $CLAWDBOT_STATE_DIR/credentials/oauth.json | Legacy OAuth import (copied into auth profiles on first use) |
| $CLAWDBOT_STATE_DIR/agents/<agentId>/agent/auth-profiles.json | Auth profiles (OAuth + API keys) |
| $CLAWDBOT_STATE_DIR/agents/<agentId>/agent/auth.json | Runtime auth cache (managed automatically) |
| $CLAWDBOT_STATE_DIR/credentials/ | Provider state (e.g. whatsapp//creds.json) |
| $CLAWDBOT_STATE_DIR/agents/ | Per‑agent state (agentDir + sessions) |
| $CLAWDBOT_STATE_DIR/agents/<agentId>/sessions/ | Conversation history & state (per agent) |
| $CLAWDBOT_STATE_DIR/agents/<agentId>/sessions/sessions.json | Session metadata (per agent) |

Legacy single‑agent path: ~/.clawdbot/agent/* (migrated by moltbot doctor). Your workspace (AGENTS.md, memory files, skills, etc.) is separate and configured via agents.defaults.workspace (default: ~/clawd).

Where should AGENTS.md, SOUL.md, USER.md, and MEMORY.md live?

These files live in the agent workspace, not ~/.clawdbot. Workspace (per agent): AGENTS.md, SOUL.md, IDENTITY.md, USER.md, MEMORY.md (or memory.md), memory/YYYY-MM-DD.md, optional HEARTBEAT.md.
State dir (~/.clawdbot): config, credentials, auth profiles, sessions, logs, and shared skills (~/.clawdbot/skills). Default workspace is ~/clawd, configurable via:

```json5
{ agents: { defaults: { workspace: "~/clawd" } } }
```

If the bot “forgets” after a restart, confirm the Gateway is using the same workspace on every launch (and remember: remote mode uses the gateway host’s workspace, not your local laptop). Tip: if you want a durable behavior or preference, ask the bot to write it into AGENTS.md or MEMORY.md rather than relying on chat history. See Agent workspace and Memory.
What's the recommended backup strategy?

Put your agent workspace in a private git repo and back it up somewhere private (for example, a private GitHub repo).
This captures memory + AGENTS/SOUL/USER files, and lets you restore the assistant’s “mind” later. Do not commit anything under ~/.clawdbot (credentials, sessions, tokens). If you need a full restore, back up both the workspace and the state directory separately (see the migration question above). Docs: Agent workspace.
How do I completely uninstall Moltbot?

See the dedicated guide: Uninstall.

Can agents work outside the workspace?

Yes. The workspace is the default cwd and memory anchor, not a hard sandbox. Relative paths resolve inside the workspace, but absolute paths can access other host locations unless sandboxing is enabled.
If you need isolation, use agents.defaults.sandbox or per‑agent sandbox settings. If you want a repo to be the default working directory, point that agent’s workspace to the repo root. The Moltbot repo is just source code; keep the workspace separate unless you intentionally want the agent to work inside it.

Example (repo as default cwd):

```json5
{
  agents: {
    defaults: {
      workspace: "~/Projects/my-repo"
    }
  }
}
```

I'm in remote mode; where is the session store?

Session state is owned by the gateway host.
If you’re in remote mode, the session store you care about is on the remote machine, not your local laptop. See Session management.

Config basics

What format is the config? Where is it?

Moltbot reads an optional JSON5 config from $CLAWDBOT_CONFIG_PATH (default: ~/.clawdbot/moltbot.json). If the file is missing, it uses safe‑ish defaults (including a default workspace of ~/clawd).

I set gateway.bind to "lan" or "tailnet" and now nothing listens; the UI says unauthorized

Non-loopback binds require auth.
Configure gateway.auth.mode + gateway.auth.token (or use CLAWDBOT_GATEWAY_TOKEN).

```json5
{ gateway: { bind: "lan", auth: { mode: "token", token: "replace-me" } } }
```

Notes:

- gateway.remote.token is for remote CLI calls only; it does not enable local gateway auth.
- The Control UI authenticates via connect.params.auth.token (stored in app/UI settings). Avoid putting tokens in URLs.
Why do I need a token on localhost now?

The wizard generates a gateway token by default (even on loopback) so local WS clients must authenticate.
This blocks other local processes from calling the Gateway. Paste the token into the Control UI settings (or your client config) to connect. If you really want open loopback, remove gateway.auth from your config. Doctor can generate a token for you any time: moltbot doctor --generate-gateway-token.
Do I have to restart after changing config?

The Gateway watches the config and supports hot‑reload:

- gateway.reload.mode: "hybrid" (default): hot‑apply safe changes, restart for critical ones
- hot, restart, and off are also supported

How do I enable web search and web fetch?

web_fetch works without an API key. web_search requires a Brave Search API key. Recommended: run moltbot configure --section web to store it in tools.web.search.apiKey. Environment alternative: set BRAVE_API_KEY for the Gateway process.
```json5
{
  tools: {
    web: {
      search: { enabled: true, apiKey: "BRAVE_API_KEY_HERE", maxResults: 5 },
      fetch: { enabled: true }
    }
  }
}
```

Notes:
- If you use allowlists, add web_search/web_fetch or group:web.
- web_fetch is enabled by default (unless explicitly disabled).
- Daemons read env vars from ~/.clawdbot/.env (or the service environment).

Docs: Web tools.
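If you go the environment route for a daemonized Gateway, the global fallback env file mentioned above would look something like this (the key name is from the docs; the value is a placeholder):

```ini
# ~/.clawdbot/.env (read by the Gateway; never overrides variables already set)
BRAVE_API_KEY=your-brave-search-api-key
```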
How do I run a central Gateway with specialized workers across devices?

The common pattern is one Gateway (e.g. a Raspberry Pi) plus nodes and agents:
- Gateway (central): owns channels (Signal/WhatsApp), routing, and sessions.
- Nodes (devices): Macs/iOS/Android connect as peripherals and expose local tools (system.run, canvas, camera).
- Agents (workers): separate brains/workspaces for special roles (e.g. "Hetzner ops", "Personal data").
- Sub-agents: spawn background work from a main agent when you want parallelism.
- TUI: connect to the Gateway and switch agents/sessions.

Docs: Nodes, Remote access, Multi-Agent Routing, Sub-agents, TUI.
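A sketch of the agent side of that pattern, following the routing.agents shape shown earlier in this document; the agent names, workspaces, and the worker's settings are illustrative:

```json5
{
  routing: {
    agents: {
      // main agent runs on the host with its own workspace
      main: { workspace: "~/clawd", sandbox: { mode: "off" } },
      // hypothetical specialized worker with a separate workspace
      "hetzner-ops": { workspace: "~/agents/hetzner-ops" }
    }
  }
}
```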
Can the Moltbot browser run headless?

Yes. It's a config option:

```json5
{
  browser: { headless: true },
  agents: { defaults: { sandbox: { browser: { headless: true } } } }
}
```

Default is false (headful). Headless is more likely to trigger anti-bot checks on some sites. See Browser.
Headless uses the same Chromium engine and works for most automation (forms, clicks, scraping, logins). The main differences:
- No visible browser window (use screenshots if you need visuals).
- Some sites are stricter about automation in headless mode (CAPTCHAs, anti-bot). For example, X/Twitter often blocks headless sessions.

How do I use Brave for browser control?

Set browser.executablePath to your Brave binary (or any Chromium-based browser) and restart the Gateway. See the full config examples in Browser.

Remote gateways + nodes

How do commands propagate between Telegram, the gateway, and nodes?

Telegram messages are handled by the gateway.
The gateway runs the agent and only then calls nodes over the Gateway WebSocket when a node tool is needed:

Telegram → Gateway → Agent → node.* → Node → Gateway → Telegram

Nodes don't see inbound provider traffic; they only receive node RPC calls.

How can my agent access my computer if the Gateway is hosted remotely?

Short answer: pair your computer as a node. The Gateway runs elsewhere, but it can call node.* tools (screen, camera, system) on your local machine over the Gateway WebSocket. Typical setup:
1. Run the Gateway on the always-on host (VPS/home server).
2. Put the Gateway host + your computer on the same tailnet.
3. Ensure the Gateway WS is reachable (tailnet bind or SSH tunnel).
4. Open the macOS app locally and connect in Remote over SSH mode (or direct tailnet) so it can register as a node.
5. Approve the node on the Gateway:

```
moltbot nodes pending
moltbot nodes approve
```

No separate TCP bridge is required; nodes connect over the Gateway WebSocket.
Security reminder: pairing a macOS node allows system.run on that machine. Only pair devices you trust, and review Security.

Docs: Nodes, Gateway protocol, macOS remote mode, Security.

Tailscale is connected but I get no replies. What now?

Check the basics:
- Gateway is running: moltbot gateway status
- Gateway health: moltbot status
- Channel health: moltbot channels status

Then verify auth and routing:
- If you use Tailscale Serve, make sure gateway.auth.allowTailscale is set correctly.
- If you connect via SSH tunnel, confirm the local tunnel is up and points at the right port.
- Confirm your allowlists (DM or group) include your account.

Docs: Tailscale, Remote access, Channels.

Can two Moltbots talk to each other (local + VPS)?

Yes. There is no built-in "bot-to-bot" bridge, but you can wire it up in a few reliable ways:
- Simplest: use a normal chat channel both bots can access (Telegram/Slack/WhatsApp). Have Bot A send a message to Bot B, then let Bot B reply as usual.
- CLI bridge (generic): run a script that calls the other Gateway with moltbot agent --message ... --deliver, targeting a chat where the other bot listens. If one bot is on a remote VPS, point your CLI at that remote Gateway via SSH/Tailscale (see Remote access).

Example pattern (run from a machine that can reach the target Gateway):

```
moltbot agent --message "Hello from local bot" --deliver --channel telegram --reply-to
```

Tip: add a guardrail so the two bots do not loop endlessly (mention-only, channel allowlists, or a "do not reply to bot messages" rule).

Docs: Remote access, Agent CLI, Agent send.

Do I need separate VPSes for multiple agents?

No.
One Gateway can host multiple agents, each with its own workspace, model defaults, and routing. That is the normal setup, and it is much cheaper and simpler than running one VPS per agent. Use separate VPSes only when you need hard isolation (security boundaries) or very different configs that you do not want to share. Otherwise, keep one Gateway and use multiple agents or sub-agents.

Is there a benefit to using a node on my personal laptop instead of SSH from a VPS?

Yes: nodes are the first-class way to reach your laptop from a remote Gateway, and they unlock more than shell access.
The Gateway runs on macOS/Linux (Windows via WSL2) and is lightweight (a small VPS or Raspberry Pi-class box is fine; 4 GB RAM is plenty), so a common setup is an always-on host plus your laptop as a node.

- No inbound SSH required. Nodes connect out to the Gateway WebSocket and use device pairing.
- Safer execution controls. system.run is gated by node allowlists/approvals on that laptop.
- More device tools. Nodes expose canvas, camera, and screen in addition to system.run.
- Local browser automation. Keep the Gateway on a VPS, but run Chrome locally and relay control with the Chrome extension + a node host on the laptop.

SSH is fine for ad-hoc shell access, but nodes are simpler for ongoing agent workflows and device automation.

Docs: Nodes, Nodes CLI, Chrome extension.

Should I install on a second laptop or just add a node?

If you only need local tools (screen/camera/exec) on the second laptop, add it as a node.
That keeps a single Gateway and avoids duplicated config. Local node tools are currently macOS-only, but we plan to extend them to other OSes. Install a second Gateway only when you need hard isolation or two fully separate bots.

Docs: Nodes, Nodes CLI, Multiple gateways.

Do nodes run a gateway service?

No. Only one gateway should run per host, unless you intentionally run isolated profiles (see Multiple gateways). Nodes are peripherals that connect to the gateway (iOS/Android nodes, or macOS "node mode" in the menubar app). For headless node hosts and CLI control, see Node host CLI.
A full restart is required for gateway, discovery, and canvasHost changes.

Is there an API/RPC way to apply config?

Yes. config.apply validates + writes the full config and restarts the Gateway as part of the operation.

config.apply wiped my config. How do I recover and avoid this?

config.apply replaces the entire config.
If you send a partial object, everything else is removed.

Recover:
- Restore from backup (git or a copied ~/.clawdbot/moltbot.json).
- If you have no backup, re-run moltbot doctor and reconfigure channels/models.
- If this was unexpected, file a bug and include your last known config or any backup. A local coding agent can often reconstruct a working config from logs or history.

Avoid it:
- Use moltbot config set for small changes.
- Use moltbot configure for interactive edits.

Docs: Config, Configure, Doctor.
What's a minimal sane config for a first install?

```json5
{
  agents: { defaults: { workspace: "~/clawd" } },
  channels: { whatsapp: { allowFrom: ["+15555550123"] } }
}
```

This sets your workspace and restricts who can trigger the bot.

How do I set up Tailscale on a VPS and connect from my Mac?

Minimal steps:
1. Install + log in on the VPS:

```
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
```

2. Install + log in on your Mac: use the Tailscale app and sign in to the same tailnet.
3. Enable MagicDNS (recommended): in the Tailscale admin console, enable MagicDNS so the VPS has a stable name.
4. Use the tailnet hostname:
   - SSH: ssh user@your-vps.tailnet-xxxx.ts.net
   - Gateway WS: ws://your-vps.tailnet-xxxx.ts.net:18789

If you want the Control UI without SSH, use Tailscale Serve on the VPS:

```
moltbot gateway --tailscale serve
```

This keeps the gateway bound to loopback and exposes HTTPS via Tailscale.
See Tailscale.

How do I connect a Mac node to a remote Gateway (Tailscale Serve)?

Serve exposes the Gateway Control UI + WS. Nodes connect over the same Gateway WS endpoint. Recommended setup:
1. Make sure the VPS + Mac are on the same tailnet.
2. Use the macOS app in Remote mode (the SSH target can be the tailnet hostname). The app will tunnel the Gateway port and connect as a node.
3. Approve the node on the gateway:

```
moltbot nodes pending
moltbot nodes approve
```

Docs: Gateway protocol, Discovery, macOS remote mode.

Env vars and .env loading

How does Moltbot load environment variables?

Moltbot reads env vars from the parent process (shell, launchd/systemd, CI, etc.) and additionally loads:
- .env from the current working directory
- a global fallback .env from ~/.clawdbot/.env (aka $CLAWDBOT_STATE_DIR/.env)

Neither .env file overrides existing env vars.
You can also define inline env vars in config (applied only if missing from the process env):

```json5
{
  env: {
    OPENROUTER_API_KEY: "sk-or-...",
    vars: { GROQ_API_KEY: "gsk-..." }
  }
}
```

See /environment for full precedence and sources.

I started the Gateway via the service and my env vars disappeared. What now?

Two common fixes:
1. Put the missing keys in ~/.clawdbot/.env so they're picked up even when the service doesn't inherit your shell env.
2. Enable shell import (opt-in convenience):

```json5
{ env: { shellEnv: { enabled: true, timeoutMs: 15000 } } }
```

This runs your login shell and imports only missing expected keys (never overrides). Env var equivalents: CLAWDBOT_LOAD_SHELL_ENV=1, CLAWDBOT_SHELL_ENV_TIMEOUT_MS=15000.

I set COPILOT_GITHUB_TOKEN but models status shows "Shell env: off". Why?

moltbot models status reports whether shell env import is enabled. "Shell env: off" does not mean your env vars are missing; it just means Moltbot won't load your login shell automatically. If the Gateway runs as a service (launchd/systemd), it won't inherit your shell environment. Fix by doing one of these:
1. Put the token in ~/.clawdbot/.env:

```
COPILOT_GITHUB_TOKEN=...
```

2. Or enable shell import (env.shellEnv.enabled: true).
3. Or add it to your config env block (applies only if missing).
Then restart the gateway and recheck:

```
moltbot models status
```

Copilot tokens are read from COPILOT_GITHUB_TOKEN (also GH_TOKEN / GITHUB_TOKEN). See /concepts/model-providers and /environment.

Sessions & multiple chats

How do I start a fresh conversation?

Send /new or /reset as a standalone message. See Session management.
Do sessions reset automatically if I never send /new?

Yes. Sessions expire after session.idleMinutes (default 60). The next message starts a fresh session id for that chat key. This does not delete transcripts; it just starts a new session.

```json5
{ session: { idleMinutes: 240 } }
```

Is there a way to make a team of Moltbots, one CEO and many agents?

Yes, via multi-agent routing and sub-agents. You can create one coordinator agent and several worker agents with their own workspaces and models. That said, this is best seen as a fun experiment.
It is token heavy and often less efficient than using one bot with separate sessions. The typical model we envision is one bot you talk to, with different sessions for parallel work. That bot can also spawn sub-agents when needed.

Docs: Multi-agent routing, Sub-agents, Agents CLI.

Why did context get truncated mid-task? How do I prevent it?

Session context is limited by the model window. Long chats, large tool outputs, or many files can trigger compaction or truncation.
What helps:
- Ask the bot to summarize the current state and write it to a file.
- Use /compact before long tasks, and /new when switching topics.
- Keep important context in the workspace and ask the bot to read it back.
- Use sub-agents for long or parallel work so the main chat stays smaller.
- Pick a model with a larger context window if this happens often.

How do I completely reset Moltbot but keep it installed?

Use the reset command:

```
moltbot reset
```

Non-interactive full reset:

```
moltbot reset --scope full --yes --non-interactive
```

Then re-run onboarding:

```
moltbot onboard --install-daemon
```

Notes:
- The onboarding wizard also offers Reset if it sees an existing config. See Wizard.
- If you used profiles (--profile / CLAWDBOT_PROFILE), reset each state dir (defaults are ~/.clawdbot-<profile>).
- Dev reset: moltbot gateway --dev --reset (dev-only; wipes dev config + credentials + sessions + workspace).

I'm getting "context too large" errors. How do I reset or compact?

Use one of these:
- Compact (keeps the conversation but summarizes older turns): send /compact, optionally followed by text to guide the summary.
- Reset (fresh session ID for the same chat key): send /new or /reset.

If it keeps happening:
- Enable or tune session pruning (agents.defaults.contextPruning) to trim old tool output.
- Use a model with a larger context window.

Docs: Compaction, Session pruning, Session management.

Why am I seeing "LLM request rejected" messages (messages.N.content.X.tool_use.input: Field required)?

This is a provider validation error: the model emitted a tool_use block without the required input. It usually means the session history is stale or corrupted (often after long threads or a tool/schema change). Fix: start a fresh session with /new (standalone message).
Why am I getting heartbeat messages every 30 minutes?

Heartbeats run every 30m by default. Tune or disable them:

```json5
{
  agents: {
    defaults: {
      heartbeat: {
        every: "2h" // or "0m" to disable
      }
    }
  }
}
```

If HEARTBEAT.md exists but is effectively empty (only blank lines and markdown headers like # Heading), Moltbot skips the heartbeat run to save API calls. If the file is missing, the heartbeat still runs and the model decides what to do. Per-agent overrides use agents.list[].heartbeat.

Docs: Heartbeat.

Do I need to add a bot account to a WhatsApp group?

No. Moltbot runs on your own account, so if you're in the group, Moltbot can see it. By default, group replies are blocked until you allow senders (groupPolicy: "allowlist").
If you want only you to be able to trigger group replies:

```json5
{
  channels: {
    whatsapp: {
      groupPolicy: "allowlist",
      groupAllowFrom: ["+15551234567"]
    }
  }
}
```

How do I get the JID of a WhatsApp group?

Option 1 (fastest): tail logs and send a test message in the group:

```
moltbot logs --follow --json
```

Look for chatId (or from) ending in @g.us, like 1234567890-1234567890@g.us.

Option 2 (if already configured/allowlisted): list groups from config:

```
moltbot directory groups list --channel whatsapp
```

Docs: WhatsApp, Directory, Logs.

Why doesn't Moltbot reply in a group?

Two common causes:
- Mention gating is on (default). You must @mention the bot (or match mentionPatterns).
- You configured channels.whatsapp.groups without "*" and the group isn't allowlisted.

See Groups and Group messages.

Do groups/threads share context with DMs?

Direct chats collapse to the main session by default.
Groups/channels have their own session keys, and Telegram topics / Discord threads are separate sessions. See Groups and Group messages.

How many workspaces and agents can I create?

No hard limits. Dozens (even hundreds) are fine, but watch for:
- Disk growth: sessions + transcripts live under ~/.clawdbot/agents/<agent>/sessions/.
- Token cost: more agents means more concurrent model usage.
- Ops overhead: per-agent auth profiles, workspaces, and channel routing.

Tips:
- Keep one active workspace per agent (agents.defaults.workspace).
- Prune old sessions (delete JSONL or store entries) if disk grows.
- Use moltbot doctor to spot stray workspaces and profile mismatches.
Can I run multiple bots or chats at the same time (Slack), and how should I set that up?

Yes. Use Multi-Agent Routing to run multiple isolated agents and route inbound messages by channel/account/peer. Slack is supported as a channel and can be bound to specific agents.

Browser access is powerful but not "do anything a human can": anti-bot, CAPTCHAs, and MFA can still block automation. For the most reliable browser control, use the Chrome extension relay on the machine that runs the browser (and keep the Gateway anywhere).

Best-practice setup:
- Always-on Gateway host (VPS/Mac mini).
- One agent per role (bindings).
- Slack channel(s) bound to those agents.
- Local browser via extension relay (or a node) when needed.

Docs: Multi-Agent Routing, Slack, Browser, Chrome extension, Nodes.

Models: defaults, selection, aliases, switching

What is the default model?

Moltbot's default model is whatever you set as:

```
agents.defaults.model.primary
```

Models are referenced as provider/model (example: anthropic/claude-opus-4-5). If you omit the provider, Moltbot currently assumes anthropic as a temporary deprecation fallback, but you should still explicitly set provider/model.
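For example, a fully qualified default plus an optional fallback list. The field names agents.defaults.model.primary and .fallbacks appear elsewhere in this FAQ; the specific model IDs are illustrative:

```json5
{
  agents: {
    defaults: {
      model: {
        // always spell out provider/model; a bare model id relies on the
        // deprecated implicit-anthropic fallback
        primary: "anthropic/claude-opus-4-5",
        // optional: tried in order on provider errors (see the failover section)
        fallbacks: ["anthropic/claude-sonnet-4-5"]
      }
    }
  }
}
```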
What model do you recommend?

- Recommended default: anthropic/claude-opus-4-5.
- Good alternative: anthropic/claude-sonnet-4-5.
- Reliable (less character): openai/gpt-5.2, nearly as good as Opus, just less personality.
- Budget: zai/glm-4.7.
- MiniMax M2.1 has its own docs: MiniMax and Local models.

Rule of thumb: use the best model you can afford for high-stakes work, and a cheaper model for routine chat or summaries. You can route models per agent and use sub-agents to parallelize long tasks (each sub-agent consumes tokens). See Models and Sub-agents.

Strong warning: weaker/over-quantized models are more vulnerable to prompt injection and unsafe behavior. See Security.

More context: Models.

Can I use self-hosted models (llama.cpp, vLLM, Ollama)?

Yes.
If your local server exposes an OpenAI-compatible API, you can point a custom provider at it. Ollama is supported directly and is the easiest path.

Security note: smaller or heavily quantized models are more vulnerable to prompt injection. We strongly recommend large models for any bot that can use tools. If you still want small models, enable sandboxing and strict tool allowlists.

Docs: Ollama, Local models, Model providers, Security, Sandboxing.
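As a rough sketch of pointing a custom provider at a local OpenAI-compatible server: the baseUrl and api field names are mentioned later in this FAQ, but the surrounding structure here is illustrative rather than the authoritative schema; see Model providers for the real shape.

```json5
// Illustrative only; check the Model providers docs for the actual schema.
{
  models: {
    providers: {
      local: {
        baseUrl: "http://127.0.0.1:8080/v1", // llama.cpp / vLLM style endpoint
        api: "openai"                         // speak the OpenAI-compatible API
      }
    }
  }
}
```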
How do I switch models without wiping my config?

Use model commands or edit only the model fields. Avoid full config replaces. Safe options:
- /model in chat (quick, per-session)
- moltbot models set ... (updates just model config)
- moltbot configure --section models (interactive)
- edit agents.defaults.model in ~/.clawdbot/moltbot.json

Avoid config.apply with a partial object unless you intend to replace the whole config. If you did overwrite config, restore from backup or re-run moltbot doctor to repair.

Docs: Models, Configure, Config, Doctor.

What do Clawd, Flawd, and Krill use for models?

- Clawd + Flawd: Anthropic Opus (anthropic/claude-opus-4-5); see Anthropic.
- Krill: MiniMax M2.1 (minimax/MiniMax-M2.1); see MiniMax.
How do I switch models on the fly without restarting?

Use the /model command as a standalone message:

```
/model sonnet
/model haiku
/model opus
/model gpt
/model gpt-mini
/model gemini
/model gemini-flash
```

You can list available models with /model, /model list, or /model status. /model (and /model list) shows a compact, numbered picker. Select by number:

```
/model 3
```

You can also force a specific auth profile for the provider (per session):

```
/model opus@anthropic:default
/model opus@anthropic:work
```

Tip: /model status shows which agent is active, which auth-profiles.json file is being used, and which auth profile will be tried next. It also shows the configured provider endpoint (baseUrl) and API mode (api) when available.
How do I unpin a profile I set with @profile?

Re-run /model without the @profile suffix:

```
/model anthropic/claude-opus-4-5
```

If you want to return to the default, pick it from /model (or send /model <default provider/model>). Use /model status to confirm which auth profile is active.

Can I use GPT 5.2 for daily tasks and Codex 5.2 for coding?

Yes. Set one as default and switch as needed:
- Quick switch (per session): /model gpt-5.2 for daily tasks, /model gpt-5.2-codex for coding.
- Default + switch: set agents.defaults.model.primary to openai-codex/gpt-5.2, then switch to openai-codex/gpt-5.2-codex when coding (or the other way around).
- Sub-agents: route coding tasks to sub-agents with a different default model.

See Models and Slash commands.

Why do I see "Model is not allowed" and then no reply?

If agents.defaults.models is set, it becomes the allowlist for /model and any session overrides.
Choosing a model that isn't in that list returns:

```
Model "provider/model" is not allowed. Use /model to list available models.
```

That error is returned instead of a normal reply. Fix: add the model to agents.defaults.models, remove the allowlist, or pick a model from /model list.

Why do I see "Unknown model minimax/MiniMax-M2.1"?

This means the provider isn't configured (no MiniMax provider config or auth profile was found), so the model can't be resolved. A fix for this detection is in 2026.1.12 (unreleased at the time of writing).

Fix checklist:
1. Upgrade to 2026.1.12 (or run from source main), then restart the gateway.
2. Make sure MiniMax is configured (wizard or JSON), or that a MiniMax API key exists in env/auth profiles so the provider can be injected.
3. Use the exact model id (case-sensitive): minimax/MiniMax-M2.1 or minimax/MiniMax-M2.1-lightning.
4. Run:

```
moltbot models list
```

and pick from the list (or /model list in chat).
See MiniMax and Models.

Can I use MiniMax as my default and OpenAI for complex tasks?

Yes. Use MiniMax as the default and switch models per session when needed. Fallbacks are for errors, not "hard tasks," so use /model or a separate agent.

Option A: switch per session

```json5
{
  env: { MINIMAX_API_KEY: "sk-...", OPENAI_API_KEY: "sk-..." },
  agents: {
    defaults: {
      model: { primary: "minimax/MiniMax-M2.1" },
      models: {
        "minimax/MiniMax-M2.1": { alias: "minimax" },
        "openai/gpt-5.2": { alias: "gpt" }
      }
    }
  }
}
```

Then:

```
/model gpt
```

Option B: separate agents
- Agent A default: MiniMax
- Agent B default: OpenAI
- Route by agent, or use /agent to switch

Docs: Models, Multi-Agent Routing, MiniMax, OpenAI.

Are opus, sonnet, gpt built-in shortcuts?

Yes. Moltbot ships a few default shorthands (only applied when the model exists in agents.defaults.models):
- opus → anthropic/claude-opus-4-5
- sonnet → anthropic/claude-sonnet-4-5
- gpt → openai/gpt-5.2
- gpt-mini → openai/gpt-5-mini
- gemini → google/gemini-3-pro-preview
- gemini-flash → google/gemini-3-flash-preview

If you set your own alias with the same name, your value wins.

How do I define/override model shortcuts (aliases)?

Aliases come from agents.defaults.models.<provider/model>.alias.
Example:

```json5
{
  agents: {
    defaults: {
      model: { primary: "anthropic/claude-opus-4-5" },
      models: {
        "anthropic/claude-opus-4-5": { alias: "opus" },
        "anthropic/claude-sonnet-4-5": { alias: "sonnet" },
        "anthropic/claude-haiku-4-5": { alias: "haiku" }
      }
    }
  }
}
```

Then /model sonnet (or /<alias> when supported) resolves to that model ID.

How do I add models from other providers like OpenRouter or Z.AI?

OpenRouter (pay-per-token; many models):

```json5
{
  agents: {
    defaults: {
      model: { primary: "openrouter/anthropic/claude-sonnet-4-5" },
      models: { "openrouter/anthropic/claude-sonnet-4-5": {} }
    }
  },
  env: { OPENROUTER_API_KEY: "sk-or-..." }
}
```

Z.AI (GLM models):

```json5
{
  agents: {
    defaults: {
      model: { primary: "zai/glm-4.7" },
      models: { "zai/glm-4.7": {} }
    }
  },
  env: { ZAI_API_KEY: "..." }
}
```

If you reference a provider/model but the required provider key is missing, you'll get a runtime auth error (e.g. No API key found for provider "zai").

"No API key found for provider" after adding a new agent

This usually means the new agent has an empty auth store.
Auth is per-agent and stored in:

```
~/.clawdbot/agents/<agent>/agent/auth-profiles.json
```

Fix options:
- Run moltbot agents add and configure auth during the wizard.
- Or copy auth-profiles.json from the main agent's agentDir into the new agent's agentDir.

Do not reuse agentDir across agents; it causes auth/session collisions.

Model failover and "All models failed"

How does failover work?

Failover happens in two stages:
1. Auth profile rotation within the same provider.
2. Model fallback to the next model in agents.defaults.model.fallbacks.

Cooldowns apply to failing profiles (exponential backoff), so Moltbot can keep responding even when a provider is rate-limited or temporarily failing.

What does this error mean?

```
No credentials found for profile "anthropic:default"
```

It means the system attempted to use the auth profile ID anthropic:default, but could not find credentials for it in the expected auth store.

Fix checklist for "No credentials found for profile anthropic:default":
- Confirm where auth profiles live (new vs legacy paths). Current: ~/.clawdbot/agents/<agent>/agent/auth-profiles.json. Legacy: ~/.clawdbot/agent/* (migrated by moltbot doctor).
- Confirm your env var is loaded by the Gateway. If you set ANTHROPIC_API_KEY in your shell but run the Gateway via systemd/launchd, it may not inherit it. Put it in ~/.clawdbot/.env or enable env.shellEnv.
- Make sure you're editing the correct agent. Multi-agent setups mean there can be multiple auth-profiles.json files.
- Sanity-check model/auth status. Use moltbot models status to see configured models and whether providers are authenticated.

Fix checklist for "No credentials found for profile anthropic":

This means the run is pinned to an Anthropic auth profile, but the Gateway can't find it in its auth store.
- Use a setup-token: run claude setup-token, then paste it with moltbot models auth setup-token --provider anthropic. If the token was created on another machine, use moltbot models auth paste-token --provider anthropic.
- If you want to use an API key instead: put ANTHROPIC_API_KEY in ~/.clawdbot/.env on the gateway host.
- Clear any pinned order that forces a missing profile:

```
moltbot models auth order clear --provider anthropic
```

- Confirm you're running commands on the gateway host: in remote mode, auth profiles live on the gateway machine, not your laptop.
Why did it also try Google Gemini and fail?

If your model config includes Google Gemini as a fallback (or you switched to a Gemini shorthand), Moltbot will try it during model fallback. If you haven't configured Google credentials, you'll see No API key found for provider "google". Fix: either provide Google auth, or remove/avoid Google models in agents.defaults.model.fallbacks / aliases so fallback doesn't route there.

"LLM request rejected: thinking signature required" (Google Antigravity)

Cause: the session history contains thinking blocks without signatures (often from an aborted/partial stream).
Google Antigravity requires signatures for thinking blocks. Fix: Moltbot now strips unsigned thinking blocks for Google Antigravity Claude. If it still appears, start a new session or set /thinking off for that agent.

Auth profiles: what they are and how to manage them

Related: /concepts/oauth (OAuth flows, token storage, multi-account patterns)

What is an auth profile?

An auth profile is a named credential record (OAuth or API key) tied to a provider.
Profiles live in:

```
~/.clawdbot/agents/<agent>/agent/auth-profiles.json
```

What are typical profile IDs?

Moltbot uses provider-prefixed IDs like:
- anthropic:default (common when no email identity exists)
- anthropic:<email> for OAuth identities
- custom IDs you choose (e.g. anthropic:work)

Can I control which auth profile is tried first?

Yes. Config supports optional metadata for profiles and an ordering per provider (auth.order.<provider>).
This does not store secrets; it maps IDs to provider/mode and sets rotation order. Moltbot may temporarily skip a profile if it's in a short cooldown (rate limits/timeouts/auth failures) or a longer disabled state (billing/insufficient credits). To inspect this, run moltbot models status --json and check auth.unusableProfiles. Tuning: auth.cooldowns.billingBackoffHours*.

You can also set a per-agent order override (stored in that agent's auth-profiles.json) via the CLI:

```
# Defaults to the configured default agent (omit --agent)
moltbot models auth order get --provider anthropic

# Lock rotation to a single profile (only try this one)
moltbot models auth order set --provider anthropic anthropic:default

# Or set an explicit order (fallback within provider)
moltbot models auth order set --provider anthropic anthropic:work anthropic:default

# Clear override (fall back to config auth.order / round-robin)
moltbot models auth order clear --provider anthropic
```

To target a specific agent:

```
moltbot models auth order set --provider anthropic --agent main anthropic:default
```

OAuth vs API key: what's the difference?

Moltbot supports both:
- OAuth often leverages subscription access (where applicable).
- API keys use pay-per-token billing.

The wizard explicitly supports Anthropic setup-token and OpenAI Codex OAuth and can store API keys for you.

Gateway: ports, "already running", and remote mode

What port does the Gateway use?

gateway.port controls the single multiplexed port for WebSocket + HTTP (Control UI, hooks, etc.).
Precedence:

```
--port > CLAWDBOT_GATEWAY_PORT > gateway.port > default 18789
```

Why does moltbot gateway status say "Runtime: running" but the RPC probe failed?

Because "running" is the supervisor's view (launchd/systemd/schtasks). The RPC probe is the CLI actually connecting to the gateway WebSocket and calling status. Use moltbot gateway status and trust these lines:
- Probe target: (the URL the probe actually used)
- Listening: (what's actually bound on the port)
- Last gateway error: (common root cause when the process is alive but the port isn't listening)

Why does moltbot gateway status show Config (cli) and Config (service) as different?

You're editing one config file while the service is running another (often a --profile / CLAWDBOT_STATE_DIR mismatch). Fix:

```
moltbot gateway install --force
```
What does another gateway instance is already listening mean Moltbot enforces a runtime lock by binding the WebSocket listener immediately on startup (default ). If the bind fails with ws://127.0.0.1:18789 , it throws GatewayLockError indicating EADDRINUSE ## Page 309 another instance is already listening. Fix: stop the other instance, free the port, or run with moltbot gateway . --port How do I run Moltbot in remote mode client connects to a Gateway elsewhere Set gateway.mode: "remote" and point to a remote WebSocket URL, optionally with a token/password: None ## Page 310 { gateway: { mode: "remote", remote: { url: "ws://gateway.tailnet:18789", token: "your-token", password: "your-password" } } } Notes: moltbot gateway only starts when gateway.mode is local (or you pass the override flag).
- The macOS app watches the config file and switches modes live when these values change.

The Control UI says unauthorized or keeps reconnecting. What now?

Your gateway is running with auth enabled (gateway.auth.*), but the UI is not sending the matching token/password.

Facts (from code):
- The Control UI stores the token in browser localStorage key moltbot.control.settings.v1.
- The UI can import ?token=... (and/or ?password=...) once, then strips it from the URL.

Fix:
- Fastest: moltbot dashboard (prints + copies a tokenized link and tries to open it; shows an SSH hint if headless).
- If you don't have a token yet: moltbot doctor --generate-gateway-token.
- If remote, tunnel first: ssh -N -L 18789:127.0.0.1:18789 user@host, then open http://127.0.0.1:18789/?token=....
- Set gateway.auth.token (or CLAWDBOT_GATEWAY_TOKEN) on the gateway host.
- In the Control UI settings, paste the same token (or refresh with a one-time ?token=... link).

Still stuck?
Run moltbot status --all and follow Troubleshooting. See Dashboard for auth details. ## Page 313 I set gatewaybind tailnet but it cant bind nothing listens tailnet bind picks a Tailscale IP from your network interfaces (100.64.0.0/10). If the machine isn’t on Tailscale (or the interface is down), there’s nothing to bind to.
Fix:

- Start Tailscale on that host (so it has a 100.x address), or
- Switch to `gateway.bind: "loopback"` / `"lan"`.
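What the tailnet bind is looking for can be illustrated with a quick check for a Tailscale-range address (the CGNAT block 100.64.0.0/10). This is a sketch, not Moltbot's actual interface-scanning code:

```python
import ipaddress

# Tailscale assigns addresses from the CGNAT block 100.64.0.0/10.
TAILNET = ipaddress.ip_network("100.64.0.0/10")

def pick_tailnet_ip(interface_ips):
    """Return the first Tailscale-range IPv4 address, or None if the host isn't on a tailnet."""
    for ip in interface_ips:
        addr = ipaddress.ip_address(ip)
        if addr.version == 4 and addr in TAILNET:
            return ip
    return None  # nothing to bind to -> the "can't bind" symptom above

print(pick_tailnet_ip(["127.0.0.1", "192.168.1.20", "100.101.102.103"]))  # 100.101.102.103
print(pick_tailnet_ip(["127.0.0.1", "192.168.1.20"]))                     # None
```

When the function returns None, that corresponds to the failure mode in this question: there is simply no tailnet address to bind.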
Note: tailnet is explicit. `auto` prefers loopback; use `gateway.bind: "tailnet"` when you want a tailnet-only bind.

Can I run multiple Gateways on the same host?

Usually you shouldn't need to: one Gateway can run multiple messaging channels and agents. Use multiple Gateways only when you need redundancy (e.g. a rescue bot) or hard isolation.
If you do run more than one, you must isolate:

- `CLAWDBOT_CONFIG_PATH` (per-instance config)
- `CLAWDBOT_STATE_DIR` (per-instance state)
- `agents.defaults.workspace` (workspace isolation)
- `gateway.port` (unique ports)

Quick setup (recommended):

- Use `moltbot --profile …` per instance (auto-creates `~/.clawdbot-`).
- Set a unique `gateway.port` in each profile config (or pass `--port` for manual runs).
- Install a per-profile service: `moltbot --profile gateway install`.
- Profiles also suffix service names (`com.clawdbot.`, `moltbot-gateway-.service`, …).
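As an illustration of the isolation points above (the profile name `rescue` and the port are made up; the exact keys come from the list above, but check your version's config reference for the authoritative shape), a second instance's profile config might differ only in port and workspace:

```json5
// Hypothetical config for a second, isolated profile — values are examples only.
{
  gateway: {
    port: 19001            // must not collide with the main instance's 18789
  },
  agents: {
    defaults: {
      workspace: "~/clawd-rescue"   // separate workspace per instance
    }
  }
}
```

Per the notes above, `--profile` keeps each instance's state in its own per-profile directory, so config and state isolation come along automatically.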
Full guide: Moltbot Gateway - Multiple gateways.

What does "invalid handshake" / code 1008 mean?

The Gateway is a WebSocket server, and it expects the very first message to be a connect frame. If it receives anything else, it closes the connection with code 1008 (policy violation). Common causes:

- You opened the HTTP URL in a browser (http://...) instead of a WS client.
- You used the wrong port or path.
- A proxy or tunnel stripped auth headers or sent a non-Gateway request.

Quick fixes:

1. Use the WS URL: `ws://<host>:18789` (or `wss://...` if HTTPS).
2. Don't open the WS port in a normal browser tab.
3. If auth is on, include the token/password in the connect frame.

If you're using the CLI or TUI, the URL should look like:

```
moltbot tui --url ws://<host>:18789 --token <token>
```

Protocol details: Gateway protocol.

Logging and debugging

Where are logs?

File logs (structured):

```
/tmp/moltbot/moltbot-YYYY-MM-DD.log
```

You can set a stable path via `logging.file`. The file log level is controlled by `logging.level`; console verbosity is controlled by `--verbose` and `logging.consoleLevel`. Fastest log tail:

```
moltbot logs --follow
```

Service/supervisor logs (when the gateway runs via launchd/systemd):

- macOS: `$CLAWDBOT_STATE_DIR/logs/gateway.log` and `gateway.err.log` (default: `~/.clawdbot/logs/...`; profiles use `~/.clawdbot-/logs/...`)
- Linux: `journalctl --user -u moltbot-gateway[-].service -n 200 --no-pager`
- Windows: `schtasks /Query /TN "Moltbot Gateway ()" /V /FO LIST`

See Troubleshooting for more.

How do I start/stop/restart the Gateway service?

Use the gateway helpers:

```
moltbot gateway status
moltbot gateway restart
```

If you run the gateway manually, `moltbot gateway --force` can reclaim the port.
See Gateway.

I closed my terminal on Windows - how do I restart Moltbot?

There are two Windows install modes:

1) WSL2 (recommended): the Gateway runs inside Linux. Open PowerShell, enter WSL, then restart:

```
wsl
moltbot gateway status
moltbot gateway restart
```

If you never installed the service, start it in the foreground:

```
moltbot gateway run
```

2) Native Windows (not recommended): the Gateway runs directly in Windows. Open PowerShell and run:

```
moltbot gateway status
moltbot gateway restart
```

If you run it manually (no service), use:

```
moltbot gateway run
```

Docs: Windows (WSL2), Gateway service runbook.

The Gateway is up but replies never arrive. What should I check?

Start with a quick health sweep:

```
moltbot status
moltbot models status
moltbot channels status
moltbot logs --follow
```

Common causes:

- Model auth not loaded on the gateway host (check `models status`).
- Channel pairing/allowlist blocking replies (check channel config + logs).
- WebChat/Dashboard is open without the right token.
- If you are remote, confirm the tunnel/Tailscale connection is up and that the Gateway WebSocket is reachable.
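For the remote case, a rough TCP probe can confirm that the Gateway port accepts connections at all. This is a sketch only - it performs the TCP connect, not the WebSocket handshake:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Best-effort check that something is listening on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, DNS failure, ...
        return False

# Probing a gateway on the default port would look like:
# tcp_reachable("127.0.0.1", 18789)
```

A True here only means the port is open; auth and the connect-frame handshake can still fail, so follow up with the health sweep above.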
Docs: Channels, Troubleshooting, Remote access.

"Disconnected from gateway: no reason" - what now?

This usually means the UI lost the WebSocket connection. Check:

1. Is the Gateway running? `moltbot gateway status`
2. Is the Gateway healthy? `moltbot status`
3. Does the UI have the right token? `moltbot dashboard`
4. If remote, is the tunnel/Tailscale link up?

Then tail logs:

```
moltbot logs --follow
```

Docs: Dashboard, Remote access, Troubleshooting.

Telegram setMyCommands fails with network errors. What should I check?

Start with logs and channel status:

```
moltbot channels status
moltbot channels logs --channel telegram
```

If you are on a VPS or behind a proxy, confirm outbound HTTPS is allowed and DNS works. If the Gateway is remote, make sure you are looking at logs on the Gateway host.

Docs: Telegram, Channel troubleshooting.
TUI shows no output. What should I check?

First confirm the Gateway is reachable and the agent can run:

```
moltbot status
moltbot models status
moltbot logs --follow
```

In the TUI, use `/status` to see the current state. If you expect replies in a chat channel, make sure delivery is enabled (`/deliver on`). Docs: TUI, Slash commands.

How do I completely stop, then start the Gateway?

If you installed the service:

```
moltbot gateway stop
moltbot gateway start
```

This stops/starts the supervised service (launchd on macOS, systemd on Linux). Use this when the Gateway runs in the background as a daemon. If you're running in the foreground, stop with Ctrl-C, then:

```
moltbot gateway run
```

Docs: Gateway service runbook.

ELI5: `moltbot gateway restart` vs `moltbot gateway`

- `moltbot gateway restart`: restarts the background service (launchd/systemd).
- `moltbot gateway`: runs the gateway in the foreground for this terminal session.

If you installed the service, use the gateway commands. Use `moltbot gateway` when you want a one-off, foreground run.

What's the fastest way to get more details when something fails?

Start the Gateway with `--verbose` to get more console detail.
Then inspect the log file for channel auth, model routing, and RPC errors.

Media & attachments

My skill generated an image/PDF, but nothing was sent

Outbound attachments from the agent must include a MEDIA: line (on its own line). See Moltbot assistant setup and Agent send. CLI sending:

```
moltbot message send --target +15555550123 --message "Here you go" --media /path/to/file.png
```

Also check:

- The target channel supports outbound media and isn't blocked by allowlists.
- The file is within the provider's size limits (images are resized to max 2048px). See Images.

Security and access control

Is it safe to expose Moltbot to inbound DMs?

Treat inbound DMs as untrusted input. Defaults are designed to reduce risk:

- Default behavior on DM-capable channels is pairing: unknown senders receive a pairing code, and the bot does not process their message.
Approve with: moltbot pairing approve
Pending requests are capped at 3 per channel; check `moltbot pairing list` if a code didn't arrive. Opening DMs publicly requires explicit opt-in (`dmPolicy: "open"` and allowlist `"*"`). Run `moltbot doctor` to surface risky DM policies.

Is prompt injection only a concern for public bots?
Prompt injection is about untrusted content, not just who can DM the bot. If your assistant reads external content (web search/fetch, browser pages, emails, docs, attachments, pasted logs), that content can include instructions that try to hijack the model.
This can happen even if you are the only sender. The biggest risk is when tools are enabled: the model can be tricked into exfiltrating context or calling tools on your behalf. Reduce the blast radius by:

- using a read-only or tool-disabled "reader" agent to summarize untrusted content
- keeping web_search / web_fetch / browser off for tool-enabled agents
- sandboxing and strict tool allowlists

Details: Security.

Should my bot have its own email, GitHub account, or phone number?

Yes, for most setups. Isolating the bot with separate accounts and phone numbers reduces the blast radius if something goes wrong. It also makes it easier to rotate credentials or revoke access without impacting your personal accounts. Start small: give access only to the tools and accounts you actually need, and expand later if required.

Docs: Security, Pairing.
Can I give it autonomy over my text messages, and is that safe?

We do not recommend full autonomy over your personal messages. The safest pattern is:

- Keep DMs in pairing mode or a tight allowlist.
- Use a separate number or account if you want it to message on your behalf.
- Let it draft, then approve before sending.

If you want to experiment, do it on a dedicated account and keep it isolated. See Security.

Can I use cheaper models for personal assistant tasks?

Yes, if the agent is chat-only and the input is trusted. Smaller tiers are more susceptible to instruction hijacking, so avoid them for tool-enabled agents or when reading untrusted content. If you must use a smaller model, lock down tools and run inside a sandbox. See Security.

I ran /start in Telegram but didn't get a pairing code

Pairing codes are sent only when an unknown sender messages the bot and `dmPolicy: "pairing"` is enabled. /start by itself doesn't generate a code.
Check pending requests:

```
moltbot pairing list telegram
```

If you want immediate access, allowlist your sender id or set `dmPolicy: "open"` for that account.

WhatsApp: will it message my contacts? How does pairing work?

No. Default WhatsApp DM policy is pairing: unknown senders only get a pairing code, and their message is not processed.
Moltbot only replies to chats it receives or to explicit sends you trigger. Approve pairing with:

```
moltbot pairing approve whatsapp
```

List pending requests:

```
moltbot pairing list whatsapp
```

Wizard phone number prompt: it's used to set your allowlist/owner so your own DMs are permitted. It's not used for auto-sending. If you run on your personal WhatsApp number, use that number and enable `channels.whatsapp.selfChatMode`.
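As a sketch of the personal-number setup just described (illustrative nesting built from the option names in this answer - check the WhatsApp channel docs for the authoritative shape):

```json5
// Illustrative only: personal-number WhatsApp setup with self-chat enabled.
{
  channels: {
    whatsapp: {
      dmPolicy: "pairing",   // default: unknown senders only get a pairing code
      selfChatMode: true     // reply in your own self-chat on a personal number
    }
  }
}
```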
Chat commands, aborting tasks, and "it won't stop"

How do I stop internal system messages from showing in chat?

Most internal or tool messages only appear when verbose or reasoning is enabled for that session. Fix it in the chat where you see it:

```
/verbose off
/reasoning off
```

If it is still noisy, check the session settings in the Control UI and set verbose to inherit. Also confirm you are not using a bot profile with `verboseDefault` set to on in config. Docs: Thinking and verbose, Security.

How do I stop/cancel a running task?

Send any of these as a standalone message (no slash):

```
stop
abort
esc
wait
exit
interrupt
```

These are abort triggers (not slash commands). For background processes (from the exec tool), you can ask the agent to run:
Most commands must be sent as a standalone message that starts with /, but a few shortcuts (like /status) also work inline for allowlisted senders. How do I send a Discord message from Telegram Crosscontext messaging denied ## Page 347 Moltbot blocks cross‑provider messaging by default. If a tool call is bound to Telegram, it won’t send to Discord unless you explicitly allow it. Enable cross‑provider messaging for the agent: None { agents: { defaults: { tools: { message: { crossContext: { allowAcrossProviders: true, ## Page 348 marker: { enabled: true, prefix: "[from {channel}] " } } } } } } } Restart the gateway after editing config.
If you only want this for a single agent, set it under agents.list[].tools.message instead. Why does it feel like the bot ignores rapidfire messages ## Page 349 Queue mode controls how new messages interact with an in‑flight run. Use /queue to change modes: steer - new messages redirect the current task followup - run messages one at a time collect - batch messages and reply once (default) steer-backlog - steer now, then process backlog interrupt - abort current run and start fresh You can add options like debounce:2s cap:25 drop:summarize for followup modes. ## Page 350 Answer the exact question from the screenshot/chat log Q: “What’s the default model for Anthropic with an API key?” A: In Moltbot, credentials and model selection are separate.
Setting `ANTHROPIC_API_KEY` (or storing an Anthropic API key in auth profiles) enables authentication, but the actual default model is whatever you configure in `agents.defaults.model.primary` (for example, `anthropic/claude-sonnet-4-5` or `anthropic/claude-opus-4-5`). If you see `No credentials found for profile "anthropic:default"`, it means the Gateway couldn't find Anthropic credentials in the expected `auth-profiles.json` for the agent that's running.
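Concretely, the credentials/model split described above might look like this in config. This is a sketch: the model id is the example from this answer, and authentication itself still comes from the environment or auth profiles, not from this block:

```json5
// Illustrative only: the default model is configured separately from credentials.
{
  agents: {
    defaults: {
      model: {
        primary: "anthropic/claude-sonnet-4-5"  // example id from this answer
      }
    }
  }
}
```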
Still stuck? Ask in Discord or open a GitHub discussion. Troubleshooting Install Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS, multi-agent, OAuth/API keys, model failover). For runtime diagnostics, see Troubleshooting.
For the full config reference, see Configuration.

Table of contents

Quick start and first-run setup

I'm stuck - what's the fastest way to get unstuck? What's the recommended way to install and set up Moltbot? How do I open the dashboard after onboarding?
How do I authenticate the dashboard (token) on localhost vs remote? What runtime do I need? Does it run on Raspberry Pi? Any tips for Raspberry Pi installs?
It is stuck on “wake up my friend” / onboarding will not hatch. What now? Can I migrate my setup to a new machine (Mac mini) without redoing onboarding? Where do I see what’s new in the latest version?
I can’t access docs.molt.bot (SSL error). What now? What’s the difference between stable and beta? How do I install the beta version, and what’s the difference between beta and dev?
How do I try the latest bits? How long does install and onboarding usually take? Installer stuck? How do I get more feedback?
Windows install says "git not found" or "moltbot not recognized"? The docs didn't answer my question - how do I get a better answer? How do I install Moltbot on Linux? How do I install Moltbot on a VPS? Where are the cloud/VPS install guides?
Can I ask Clawd to update itself? What does the onboarding wizard actually do? Do I need a Claude or OpenAI subscription to run this? Can I use a Claude Max subscription without an API key? How does Anthropic "setup-token" auth work?
Where do I find an Anthropic setup-token? Do you support Claude subscription auth (Claude Code OAuth)? Why am I seeing HTTP 429: rate_limit_error from Anthropic? Is AWS Bedrock supported?
How does Codex auth work? Do you support OpenAI subscription auth (Codex OAuth)? How do I set up Gemini CLI OAuth? Is a local model OK for casual chats? How do I keep hosted model traffic in a specific region?
Do I have to buy a Mac Mini to install this? Do I need a Mac mini for iMessage support? If I buy a Mac mini to run Moltbot, can I connect it to my MacBook Pro? Can I use Bun?
Telegram: what goes in allowFrom? Can multiple people use one WhatsApp number with different Moltbots? Can I run a “fast chat” agent and an “Opus for coding” agent? Does Homebrew work on Linux?
What’s the difference between the hackable (git) install and npm install? Can I switch between npm and git installs later? Should I run the Gateway on my laptop or a VPS? How important is it to run Moltbot on a dedicated machine?
What are the minimum VPS requirements and recommended OS? Can I run Moltbot in a VM, and what are the requirements? What is Moltbot? What is Moltbot, in one paragraph? What's the value proposition?
I just set it up - what should I do first? What are the top five everyday use cases for Moltbot? Can Moltbot help with lead gen, outreach, ads, and blogs for a SaaS? What are the advantages vs Claude Code for web development? Skills and automation How do I customize skills without keeping the repo dirty? Can I load skills from a custom folder? How can I use different models for different tasks?
The bot freezes while doing heavy work. How do I offload that? Cron or reminders do not fire. What should I check?
How do I install skills on Linux? Can Moltbot run tasks on a schedule or continuously in the background? Can I run Apple/macOS-only skills from Linux? Do you have a Notion or HeyGen integration?
How do I install the Chrome extension for browser takeover? Sandboxing and memory Is there a dedicated sandboxing doc? How do I bind a host folder into the sandbox? How does memory work?
Memory keeps forgetting things. How do I make it stick? Does memory persist forever? What are the limits?
Does semantic memory search require an OpenAI API key? Where things live on disk Is all data used with Moltbot saved locally? Where does Moltbot store its data? Where should AGENTS.md / SOUL.md / USER.md / MEMORY.md live?
What’s the recommended backup strategy? How do I completely uninstall Moltbot? Can agents work outside the workspace? I’m in remote mode - where is the session store?
Config basics What format is the config? Where is it? I set gateway.bind: "lan" (or "tailnet") and now nothing listens / the UI says unauthorized. Why do I need a token on localhost now? Do I have to restart after changing config?
How do I enable web search (and web fetch)? config.apply wiped my config. How do I recover and avoid this? How do I run a central Gateway with specialized workers across devices?
Can the Moltbot browser run headless? How do I use Brave for browser control? Remote gateways + nodes How do commands propagate between Telegram, the gateway, and nodes? How can my agent access my computer if the Gateway is hosted remotely?
Tailscale is connected but I get no replies. What now? Can two Moltbots talk to each other (local + VPS)? Do I need separate VPSes for multiple agents? Is there a benefit to using a node on my personal laptop instead of SSH from a VPS?
Do nodes run a gateway service? Is there an API / RPC way to apply config? What’s a minimal “sane” config for a first install? How do I set up Tailscale on a VPS and connect from my Mac?
How do I connect a Mac node to a remote Gateway (Tailscale Serve)? Should I install on a second laptop or just add a node? Env vars and .env loading How does Moltbot load environment variables? “I started the Gateway via the service and my env vars disappeared.” What now?
I set COPILOT_GITHUB_TOKEN, but models status shows “Shell env: off.” Why? Sessions & multiple chats How do I start a fresh conversation? Do sessions reset automatically if I never send /new? Is there a way to make a team of Moltbots (one CEO and many agents)? Why did context get truncated mid-task?
How do I prevent it? How do I completely reset Moltbot but keep it installed? I’m getting “context too large” errors - how do I reset or compact? Why am I seeing “LLM request rejected: messages.N.content.X.tool_use.input: Field required”?
Why am I getting heartbeat messages every 30 minutes? Do I need to add a “bot account” to a WhatsApp group? How do I get the JID of a WhatsApp group? Why doesn’t Moltbot reply in a group?
Do groups/threads share context with DMs? How many workspaces and agents can I create? Can I run multiple bots or chats at the same time (Slack), and how should I set that up? Models: defaults, selection, aliases, switching What is the “default model”?
What model do you recommend? How do I switch models without wiping my config? Can I use self-hosted models (llama.cpp, vLLM, Ollama)? What do Clawd, Flawd, and Krill use for models?
How do I switch models on the fly (without restarting)? Can I use GPT 5.2 for daily tasks and Codex 5.2 for coding? Why do I see “Model … is not allowed” and then no reply? Why do I see “Unknown model: minimax/MiniMax-M2.1”? Can I use MiniMax as my default and OpenAI for complex tasks?
Are opus / sonnet / gpt built‑in shortcuts? How do I define/override model shortcuts (aliases)? How do I add models from other providers like OpenRouter or Z.AI? Model failover and “All models failed” How does failover work?
What does this error mean? Fix checklist for No credentials found for profile "anthropic:default" Why did it also try Google Gemini and fail? Auth profiles: what they are and how to manage them What is an auth profile? What are typical profile IDs?
Can I control which auth profile is tried first? OAuth vs API key: what’s the difference? Gateway: ports, “already running”, and remote mode What port does the Gateway use? Why does moltbot gateway status say Runtime: running but RPC probe: failed?
Why does moltbot gateway status show Config (cli) and Config (service) different? What does “another gateway instance is already listening” mean? How do I run Moltbot in remote mode (client connects to a Gateway elsewhere)? The Control UI says “unauthorized” (or keeps reconnecting).
What now? I set gateway.bind: "tailnet" but it can’t bind / nothing listens Can I run multiple Gateways on the same host? What does “invalid handshake” / code 1008 mean? Logging and debugging Where are logs?
How do I start/stop/restart the Gateway service? I closed my terminal on Windows - how do I restart Moltbot? The Gateway is up but replies never arrive. What should I check?
“Disconnected from gateway: no reason” - what now? Telegram setMyCommands fails with network errors. What should I check? TUI shows no output.
What should I check? How do I completely stop then start the Gateway? ELI5: moltbot gateway restart vs moltbot gateway What’s the fastest way to get more details when something fails? Media & attachments My skill generated an image/PDF, but nothing was sent Security and access control Is it safe to expose Moltbot to inbound DMs?
Is prompt injection only a concern for public bots? Should my bot have its own email, GitHub account, or phone number? Can I give it autonomy over my text messages, and is that safe? Can I use cheaper models for personal assistant tasks? I ran /start in Telegram but didn’t get a pairing code WhatsApp: will it message my contacts? How does pairing work?
Chat commands, aborting tasks, and “it won’t stop” How do I stop internal system messages from showing in chat? How do I stop/cancel a running task? How do I send a Discord message from Telegram? (“Cross-context messaging denied”) Why does it feel like the bot “ignores” rapid‑fire messages?

First 60 seconds if something’s broken

1. Quick status (first check):

```
moltbot status
```

Fast local summary: OS + update, gateway/service reachability, agents/sessions, provider config + runtime issues (when gateway is reachable).

2. Pasteable report (safe to share):

```
moltbot status --all
```

Read-only diagnosis with log tail (tokens redacted).

3. Daemon + port state:

```
moltbot gateway status
```

Shows supervisor runtime vs RPC reachability, the probe target URL, and which config the service likely used.

4. Deep probes:

```
moltbot status --deep
```

Runs gateway health checks + provider probes (requires a reachable gateway). See Health.

5. Tail the latest log:

```
moltbot logs --follow
```

If RPC is down, fall back to:

```
tail -f "$(ls -t /tmp/moltbot/moltbot-*.log | head -1)"
```

File logs are separate from service logs; see Logging and Troubleshooting.

6. Run the doctor (repairs):

```
moltbot doctor
```

Repairs/migrates config/state + runs health checks. See Doctor.

7. Gateway snapshot:

```
moltbot health --json
moltbot health --verbose  # shows the target URL + config path on errors
```

Asks the running gateway for a full snapshot (WS-only).
See Health.

Quick start and first-run setup

I'm stuck - what's the fastest way to get unstuck?

Use a local AI agent that can see your machine.
That is far more effective than asking in Discord, because most "I'm stuck" cases are local config or environment issues that remote helpers cannot inspect.

- Claude Code: https://www.anthropic.com/claude-code/
- OpenAI Codex: https://openai.com/codex/

These tools can read the repo, run commands, inspect logs, and help fix your machine-level setup (PATH, services, permissions, auth files). Give them the full source checkout via the hackable (git) install:

```
curl -fsSL https://molt.bot/install.sh | bash -s -- --install-method git
```

This installs Moltbot from a git checkout, so the agent can read the code + docs and reason about the exact version you are running. You can always switch back to stable later by re-running the installer without `--install-method git`.
Tip: ask the agent to plan and supervise the fix (step-by-step), then execute only the necessary commands.
That keeps changes small and easier to audit.

If you discover a real bug or fix, please file a GitHub issue or send a PR:

- https://github.com/moltbot/moltbot/issues
- https://github.com/moltbot/moltbot/pulls

Start with these commands (share outputs when asking for help):

```
moltbot status
moltbot models status
moltbot doctor
```

What they do:

- `moltbot status`: quick snapshot of gateway/agent health + basic config.
- `moltbot models status`: checks provider auth + model availability.
- `moltbot doctor`: validates and repairs common config/state issues.
Other useful CLI checks: `moltbot status --all`, `moltbot logs --follow`, `moltbot gateway status`, `moltbot health --verbose`. Quick debug loop: First 60 seconds if something's broken. Install docs: Install, Installer flags, Updating.

What's the recommended way to install and set up Moltbot?

The repo recommends running from source and using the onboarding wizard:

```
curl -fsSL https://molt.bot/install.sh | bash
moltbot onboard --install-daemon
```

The wizard can also build UI assets automatically.
After onboarding, you typically run the Gateway on port 18789. From source (contributors/dev):

```
git clone https://github.com/moltbot/moltbot.git
cd moltbot
pnpm install
pnpm build
pnpm ui:build  # auto-installs UI deps on first run
moltbot onboard
```

If you don't have a global install yet, run it via `pnpm moltbot onboard`.

How do I open the dashboard after onboarding?

The wizard now opens your browser with a tokenized dashboard URL right after onboarding, and also prints the full link (with token) in the summary. Keep that tab open; if it didn't launch, copy/paste the printed URL on the same machine.
Tokens stay local to your host; nothing is fetched from the browser.

How do I authenticate the dashboard (token) on localhost vs remote?

Localhost (same machine): open http://127.0.0.1:18789/. If it asks for auth, run `moltbot dashboard` and use the tokenized link (`?token=...`). The token is the same value as `gateway.auth.token` (or `CLAWDBOT_GATEWAY_TOKEN`) and is stored by the UI after first load.
Not on localhost:

- Tailscale Serve (recommended): keep bind loopback, run `moltbot gateway --tailscale serve`, open the Serve HTTPS URL. If `gateway.auth.allowTailscale` is true, identity headers satisfy auth (no token).
- Tailnet bind: run `moltbot gateway --bind tailnet --token "..."`, open `http://<tailnet-ip>:18789/`, and paste the token in the dashboard settings.
- SSH tunnel: `ssh -N -L 18789:127.0.0.1:18789 user@host`, then open the `http://127.0.0.1:18789/?token=...` link from `moltbot dashboard`.

See Dashboard and Web surfaces for bind modes and auth details.

What runtime do I need?

Node >= 22 is required. pnpm is recommended.
Bun is not recommended for the Gateway.

Does it run on Raspberry Pi?

Yes. The Gateway is lightweight - the docs list 512MB-1GB RAM, 1 core, and about 500MB disk as enough for personal use, and note that a Raspberry Pi 4 can run it. If you want extra headroom (logs, media, other services), 2GB is recommended, but it's not a hard minimum.
Tip: a small Pi/VPS can host the Gateway, and you can pair nodes on your laptop/phone for local screen/camera/canvas or command execution. See Nodes.

Any tips for Raspberry Pi installs?

Short version: it works, but expect rough edges. Use a 64-bit OS and keep Node >= 22.
Prefer the hackable (git) install so you can see logs and update fast. Start without channels/skills, then add them one by one. If you hit weird binary issues, it is usually an ARM compatibility problem. Docs: Linux, Install.
It is stuck on "wake up my friend" / onboarding will not hatch. What now?

That screen depends on the Gateway being reachable and authenticated. The TUI also sends "Wake up, my friend!" automatically on first hatch. If you see that line with no reply and tokens stay at 0, the agent never ran.

1. Restart the Gateway:

```
moltbot gateway restart
```

2. Check status + auth:

```
moltbot status
moltbot models status
moltbot logs --follow
```

3. If it still hangs, run:

```
moltbot doctor
```

If the Gateway is remote, ensure the tunnel/Tailscale connection is up and that the UI is pointed at the right Gateway.
See Remote access.

Can I migrate my setup to a new machine (Mac mini) without redoing onboarding?

Yes. Copy the state directory and workspace, then run Doctor once.
This keeps your bot "exactly the same" (memory, session history, auth, and channel state) as long as you copy both locations:

1. Install Moltbot on the new machine.
2. Copy `$CLAWDBOT_STATE_DIR` (default: `~/.clawdbot`) from the old machine.
3. Copy your workspace (default: `~/clawd`).
4. Run `moltbot doctor` and restart the Gateway service.
That preserves config, auth profiles, WhatsApp creds, sessions, and memory. If you’re in remote mode, remember the gateway host owns the session store and workspace. Important: if you only commit/push your workspace to GitHub, you’re backing up memory + bootstrap files, but not session history or auth. Those live under ~/.clawdbot/ (for example ~/.clawdbot/agents//sessions/).
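The two locations to copy can be summarized in a tiny helper. This is illustrative; the defaults are the ones listed above, and the state directory may be overridden by `CLAWDBOT_STATE_DIR`:

```python
import os

def migration_sources(env=None):
    """Return the two directories to copy to the new machine: state dir + workspace."""
    env = env or {}
    # Config, auth profiles, channel creds, and session history live here:
    state_dir = env.get("CLAWDBOT_STATE_DIR", "~/.clawdbot")
    # Memory + bootstrap files live in the workspace (default per the steps above):
    workspace = "~/clawd"
    return [os.path.expanduser(state_dir), os.path.expanduser(workspace)]

print(migration_sources())  # e.g. ["/home/you/.clawdbot", "/home/you/clawd"]
```

Backing up only the workspace (e.g. by pushing it to GitHub) misses the first entry, which is exactly the session-history/auth caveat above.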
Related: Migrating, Where things live on disk, Agent workspace, Doctor, Remote mode.

Where do I see what's new in the latest version?

Check the GitHub changelog: https://github.com/moltbot/moltbot/blob/main/CHANGELOG.md

Newest entries are at the top. If the top section is marked Unreleased, the next dated section is the latest shipped version. Entries are grouped by Highlights, Changes, and Fixes (plus docs/other sections when needed).
I can't access docs.molt.bot (SSL error). What now?

Some Comcast/Xfinity connections incorrectly block docs.molt.bot via Xfinity Advanced Security. Disable it or allowlist docs.molt.bot, then retry. More detail: Troubleshooting. Please help us unblock it by reporting here: https://spa.xfinity.com/check_url_status.
If you still can't reach the site, the docs are mirrored on GitHub: https://github.com/moltbot/moltbot/tree/main/docs

What's the difference between stable and beta?

Stable and beta are npm dist-tags, not separate code lines:

- latest = stable
- beta = early build for testing

We ship builds to beta, test them, and once a build is solid we promote that same version to latest.
That's why beta and stable can point at the same version. See what changed: https://github.com/moltbot/moltbot/blob/main/CHANGELOG.md

How do I install the beta version, and what's the difference between beta and dev?

Beta is the npm dist-tag `beta` (may match `latest`). Dev is the moving head of main (git); when published, it uses the npm dist-tag `dev`.
One-liners (macOS/Linux):

```
curl -fsSL --proto '=https' --tlsv1.2 https://molt.bot/install.sh | bash -s -- --beta
```

```
curl -fsSL --proto '=https' --tlsv1.2 https://molt.bot/install.sh | bash -s -- --install-method git
```

Windows installer (PowerShell): https://molt.bot/install.ps1

More detail: Development channels and Installer flags.

How long does install and onboarding usually take?

Rough guide:

- Install: 2-5 minutes
- Onboarding: 5-15 minutes, depending on how many channels/models you configure

If it hangs, see "Installer stuck" and the fast debug loop in "I'm stuck".

How do I try the latest bits?

Two options:

1. Dev channel (git checkout):

```
moltbot update --channel dev
```

This switches to the main branch and updates from source.

2. Hackable install (from the installer site):

```
curl -fsSL https://molt.bot/install.sh | bash -s -- --install-method git
```
If you prefer a clean clone manually, use:

```
git clone https://github.com/moltbot/moltbot.git
cd moltbot
pnpm install
pnpm build
```

Docs: Update, Development channels, Install. Installer stuck? How do I get more feedback? Re-run the installer with verbose output: ## Page 372

```
curl -fsSL https://molt.bot/install.sh | bash -s -- --verbose
```

Beta install with verbose:

```
curl -fsSL https://molt.bot/install.sh | bash -s -- --beta --verbose
```

For a hackable (git) install:

```
curl -fsSL https://molt.bot/install.sh | bash -s -- --install-method git --verbose
```

More options: Installer flags. Windows install says "git not found" or "moltbot not recognized" ## Page 373 Two common Windows issues: 1) npm error spawn git / git not found. Install Git for Windows and make sure git is on your PATH. Close and reopen PowerShell, then re-run the installer.
2) moltbot is not recognized after install. Your npm global bin folder is not on PATH. Check the path:

```
npm config get prefix
```

Ensure that prefix's bin folder is on PATH (on most systems it is %AppData%\npm). Close and reopen PowerShell after updating PATH. If you want the smoothest Windows setup, use WSL2 instead of native Windows.
Docs: Windows. The docs didn't answer my question. How do I get a better answer? ## Page 374 Use the hackable (git) install so you have the full source and docs locally, then ask your bot (or Claude/Codex) from that folder so it can read the repo and answer precisely.

```
curl -fsSL https://molt.bot/install.sh | bash -s -- --install-method git
```

More detail: Install and Installer flags. How do I install Moltbot on Linux? Short answer: follow the Linux guide, then run the onboarding wizard.
Linux quick path + service install: Linux. Full walkthrough: Getting Started. Installer + updates: Install & updates. How do I install Moltbot on a VPS? Any Linux VPS works.
Install on the server, then use SSH/Tailscale to reach the Gateway. ## Page 375 Guides: exe.dev, Hetzner, Fly.io. Remote access: Gateway remote. Where are the cloud/VPS install guides? We keep a hosting hub with the common providers.
Pick one and follow the guide: VPS hosting (all providers in one place) Fly.io Hetzner exe.dev How it works in the cloud: the Gateway runs on the server, and you access it from your laptop/phone via the Control UI (or Tailscale/SSH). Your state + workspace live on the server, so treat the host as the source of truth and back it up. You can pair nodes (Mac/iOS/Android/headless) to that cloud Gateway to access local screen/camera/canvas or run commands on your laptop while keeping the Gateway in the cloud. Hub: Platforms.
Remote access: Gateway remote. Nodes: Nodes, Nodes CLI. ## Page 376 Can I ask Clawd to update itself? Short answer: possible, but not recommended. The update flow can restart the Gateway (which drops the active session), may need a clean git checkout, and can prompt for confirmation.
Safer: run updates from a shell as the operator. Use the CLI:

```
moltbot update
moltbot update status
moltbot update --channel stable|beta|dev
moltbot update --tag <dist-tag|version>
moltbot update --no-restart
```

If you must automate from an agent:

```
moltbot update --yes --no-restart
moltbot gateway restart
```

Docs: Update, Updating. ## Page 377 What does the onboarding wizard actually do? moltbot onboard is the recommended setup path. In local mode it walks you through:
- Model/auth setup (Anthropic setup-token recommended for Claude subscriptions, OpenAI Codex OAuth supported, API keys optional, LM Studio local models supported)
- Workspace location + bootstrap files
- Gateway settings (bind/port/auth/tailscale)
- Providers (WhatsApp, Telegram, Discord, Mattermost (plugin), Signal, iMessage)
- Daemon install (LaunchAgent on macOS; systemd user unit on Linux/WSL2)
- Health checks and skills selection

It also warns if your configured model is unknown or missing auth.
Do I need a Claude or OpenAI subscription to run this? No. You can run Moltbot with API keys (Anthropic/OpenAI/others) or with local‑only models so your data stays on your device. Subscriptions (Claude Pro/Max or OpenAI Codex) are optional ways to authenticate those providers. Docs: Anthropic, OpenAI, Local models, Models.
## Page 378 Can I use a Claude Max subscription without an API key? Yes. You can authenticate with a setup-token instead of an API key.
This is the subscription path. Claude Pro/Max subscriptions do not include an API key, so this is the correct approach for subscription accounts. Important: you must verify with Anthropic that this usage is allowed under their subscription policy and terms. If you want the most explicit, supported path, use an Anthropic API key.
How does Anthropic setup-token auth work? claude setup-token generates a token string via the Claude Code CLI (it is not available in the web console). You can run it on any machine. Choose "Anthropic token (paste setup-token)" in the wizard or paste it with moltbot models auth paste-token --provider anthropic. The token is stored as an auth profile for the anthropic provider and used like an API key (no auto-refresh).
More detail: OAuth. Where do I find an Anthropic setup-token? ## Page 379 It is not in the Anthropic Console. The setup-token is generated by the Claude Code CLI on any machine:

```
claude setup-token
```

Copy the token it prints, then choose "Anthropic token (paste setup-token)" in the wizard. If you want to run it on the gateway host, use moltbot models auth setup-token --provider anthropic.
If you ran claude setup-token elsewhere, paste it on the gateway host with moltbot models auth paste-token --provider anthropic. See Anthropic. Do you support Claude subscription auth (Claude Pro/Max)? Yes — via setup-token. Moltbot no longer reuses Claude Code CLI OAuth tokens; use a setup-token or an Anthropic API key.
Generate the token anywhere and paste it on the gateway host. See Anthropic and OAuth.
Note: Claude subscription access is governed by Anthropic’s terms. For production or multi‑user workloads, API keys are usually the safer choice. ## Page 380 Why am I seeing HTTP 429 (rate limit error) from Anthropic? That means your Anthropic quota/rate limit is exhausted for the current window. If you use a Claude subscription (setup‑token or Claude Code OAuth), wait for the window to reset or upgrade your plan.
If you use an Anthropic API key, check the Anthropic Console for usage/billing and raise limits as needed. Tip: set a fallback model so Moltbot can keep replying while a provider is rate‑limited. See Models and OAuth. Is AWS Bedrock supported? Yes - via pi‑ai’s Amazon Bedrock (Converse) provider with manual config.
You must supply AWS credentials/region on the gateway host and add a Bedrock provider entry in your models config. See Amazon Bedrock and Model providers. If you prefer a managed key flow, an OpenAI‑compatible proxy in front of Bedrock is still a valid option. How does Codex auth work? ## Page 381 Moltbot supports OpenAI Code (Codex) via OAuth (ChatGPT sign-in).
The wizard can run the OAuth flow and will set the default model to openai-codex/gpt-5.2 when appropriate. See Model providers and Wizard. Do you support OpenAI subscription auth (Codex OAuth)? Yes. Moltbot fully supports OpenAI Code (Codex) subscription OAuth.
The onboarding wizard can run the OAuth flow for you. See OAuth, Model providers, and Wizard. How do I set up Gemini CLI OAuth? Gemini CLI uses a plugin auth flow, not a client id or secret in moltbot.json. Steps:
1. Enable the plugin: moltbot plugins enable google-gemini-cli-auth
2. Login: moltbot models auth login --provider google-gemini-cli --set-default

This stores OAuth tokens in auth profiles on the gateway host.
Details: Model providers. ## Page 382 Is a local model OK for casual chats? Usually no. Moltbot needs large context + strong safety; small cards truncate and leak. If you must, run the largest MiniMax M2.1 build you can locally (LM Studio) and see /gateway/local-models.
Smaller/quantized models increase prompt-injection risk - see Security. How do I keep hosted model traffic in a specific region? Pick region-pinned endpoints. OpenRouter exposes US-hosted options for MiniMax, Kimi, and GLM; choose the US-hosted variant to keep data in-region. You can still list Anthropic/OpenAI alongside these by using models.mode: "merge" so fallbacks stay available while respecting the region-pinned provider you select.
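As an illustrative sketch only (the provider entry and model id shown here are assumptions; models.mode: "merge" is the key named above — check the Models docs for the real schema), a merge-mode config might look like:

```
{
  models: {
    mode: "merge",  // keep built-in Anthropic/OpenAI entries available as fallbacks
    providers: {
      openrouter: {
        // hypothetical entry: pick the US-hosted variant of the model you want
        models: ["minimax/minimax-m2.1:us"]
      }
    }
  }
}
```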
Do I have to buy a Mac mini to install this? ## Page 383 No. Moltbot runs on macOS or Linux (Windows via WSL2). A Mac mini is optional - some people buy one as an always‑on host, but a small VPS, home server, or Raspberry Pi‑class box works too. You only need a Mac for macOS‑only tools.
For iMessage, you can keep the Gateway on Linux and run imsg on any Mac over SSH by pointing channels.imessage.cliPath at an SSH wrapper. If you want other macOS‑only tools, run the Gateway on a Mac or pair a macOS node. Docs: iMessage, Nodes, Mac remote mode. Do I need a Mac mini for iMessage support? You need some macOS device signed into Messages.
It does not have to be a Mac mini - any Mac works. Moltbot’s iMessage integrations run on macOS (BlueBubbles or imsg), while the Gateway can run elsewhere. Common setups: Run the Gateway on Linux/VPS, and point channels.imessage.cliPath at an SSH wrapper that runs imsg on the Mac. Run everything on the Mac if you want the simplest single‑machine setup.
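A minimal sketch of the remote-imsg setup (channels.imessage.cliPath is the key named above; the wrapper path is a placeholder for your own script):

```
{
  channels: {
    imessage: {
      // hypothetical wrapper script that execs imsg on the Mac over SSH
      cliPath: "~/bin/imsg-ssh"
    }
  }
}
```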
Docs: iMessage, BlueBubbles, Mac remote mode. ## Page 384 If I buy a Mac mini to run Moltbot, can I connect it to my MacBook Pro? Yes. The Mac mini can run the Gateway, and your MacBook Pro can connect as a node (companion device). Nodes don’t run the Gateway - they provide extra capabilities like screen/camera/canvas and system.run on that device.
Common pattern: Gateway on the Mac mini (always‑on). MacBook Pro runs the macOS app or a node host and pairs to the Gateway. Use moltbot nodes status / moltbot nodes list to see it. Docs: Nodes, Nodes CLI.
Can I use Bun? Bun is not recommended. We see runtime bugs, especially with WhatsApp and Telegram. Use Node for stable gateways. If you still want to experiment with Bun, do it on a non‑production gateway without WhatsApp/Telegram.
## Page 385 Telegram: what goes in allowFrom? channels.telegram.allowFrom is the human sender’s Telegram user ID (numeric, recommended) or @username. It is not the bot username. Safer (no third-party bot): DM your bot, then run moltbot logs --follow and read from.id. Official Bot API: DM your bot, then call https://api.telegram.org/bot<bot_token>/getUpdates and read message.from.id.
Third-party (less private): DM @userinfobot or @getidsbot. See /channels/telegram. Can multiple people use one WhatsApp number with different Moltbots? Yes, via multi‑agent routing. Bind each sender’s WhatsApp DM (peer kind: "dm", sender E.164 like +15551234567) to a different agentId, so each person gets their own workspace and session store.
Replies still come from the same WhatsApp account, and DM access control ## Page 386 (channels.whatsapp.dmPolicy / channels.whatsapp.allowFrom) is global per WhatsApp account. See Multi-Agent Routing and WhatsApp. Can I run a fast chat agent and an Opus coding agent? Yes. Use multi‑agent routing: give each agent its own default model, then bind inbound routes (provider account or specific peers) to each agent.
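A hedged sketch of what that could look like (the agent ids, workspaces, and per-agent model key here are assumptions for illustration; the real schema is in the Multi-Agent Routing docs):

```
{
  routing: {
    agents: {
      chat: { workspace: "~/clawd-chat", model: "fast-chat-model" },   // quick replies
      coder: { workspace: "~/clawd-code", model: "opus-class-model" }  // heavy coding
    }
  }
}
```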
Example config lives in Multi-Agent Routing. See also Models and Configuration. Does Homebrew work on Linux? Yes. Homebrew supports Linux (Linuxbrew).
Quick setup:

```
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"' >> ~/.profile
eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"
brew install
```

## Page 387 If you run Moltbot via systemd, ensure the service PATH includes /home/linuxbrew/.linuxbrew/bin (or your brew prefix) so brew-installed tools resolve in non‑login shells. Recent builds also prepend common user bin dirs on Linux systemd services (for example ~/.local/bin, ~/.npm-global/bin, ~/.local/share/pnpm, ~/.bun/bin) and honor PNPM_HOME, NPM_CONFIG_PREFIX, BUN_INSTALL, VOLTA_HOME, ASDF_DATA_DIR, NVM_DIR, and FNM_DIR when set. What's the difference between the hackable git install and npm install? Hackable (git) install: full source checkout, editable, best for contributors. You run builds locally and can patch code/docs.
npm install: global CLI install, no repo, best for “just run it.” Updates come from npm dist‑tags. Docs: Getting started, Updating. ## Page 388 Can I switch between npm and git installs later? Yes. Install the other flavor, then run Doctor so the gateway service points at the new entrypoint.
This does not delete your data - it only changes the Moltbot code install. Your state (~/.clawdbot) and workspace (~/clawd) stay untouched. From npm → git:

```
git clone https://github.com/moltbot/moltbot.git
cd moltbot
pnpm install
pnpm build
moltbot doctor
moltbot gateway restart
```

From git → npm:

```
npm install -g moltbot@latest
moltbot doctor
moltbot gateway restart
```

Doctor detects a gateway service entrypoint mismatch and offers to rewrite the service config to match the current install (use --repair in automation). ## Page 389 Backup tips: see Backup strategy.
Should I run the Gateway on my laptop or a VPS? Short answer: if you want 24/7 reliability, use a VPS. If you want the lowest friction and you’re okay with sleep/restarts, run it locally. Laptop (local Gateway) Pros: no server cost, direct access to local files, live browser window. Cons: sleep/network drops = disconnects, OS updates/reboots interrupt, must stay awake.
VPS / cloud Pros: always‑on, stable network, no laptop sleep issues, easier to keep running. Cons: often run headless (use screenshots), remote file access only, you must SSH for updates. Moltbot-specific note: WhatsApp/Telegram/Slack/Mattermost (plugin)/Discord all work fine from a VPS. The only real trade-off is headless browser vs a visible window.
See Browser. Recommended default: VPS if you had gateway disconnects before. Local is great when you’re actively using the Mac and want local file access or UI automation with a visible browser. ## Page 390 How important is it to run Moltbot on a dedicated machine? Not required, but recommended for reliability and isolation.
Dedicated host (VPS/Mac mini/Pi): always‑on, fewer sleep/reboot interruptions, cleaner permissions, easier to keep running. Shared laptop/desktop: totally fine for testing and active use, but expect pauses when the machine sleeps or updates. If you want the best of both worlds, keep the Gateway on a dedicated host and pair your laptop as a node for local screen/camera/exec tools. See Nodes.
For security guidance, read Security. What are the minimum VPS requirements and recommended OS? Moltbot is lightweight. For a basic Gateway + one chat channel: Absolute minimum: 1 vCPU, 1GB RAM, ~500MB disk. Recommended: 1-2 vCPU, 2GB RAM or more for headroom (logs, media, multiple channels).
Node tools and browser automation can be resource hungry. OS: use Ubuntu LTS (or any modern Debian/Ubuntu). The Linux install path is best tested there. ## Page 391 Docs: Linux, VPS hosting.
Can I run Moltbot in a VM, and what are the requirements? Yes. Treat a VM the same as a VPS: it needs to be always on, reachable, and have enough RAM for the Gateway and any channels you enable. Baseline guidance: Absolute minimum: 1 vCPU, 1GB RAM. Recommended: 2GB RAM or more if you run multiple channels, browser automation, or media tools.
OS: Ubuntu LTS or another modern Debian/Ubuntu. If you are on Windows, WSL2 is the easiest VM style setup and has the best tooling compatibility. See Windows, VPS hosting. If you are running macOS in a VM, see macOS VM.
What is Moltbot? What is Moltbot in one paragraph? ## Page 392 Moltbot is a personal AI assistant you run on your own devices. It replies on the messaging surfaces you already use (WhatsApp, Telegram, Slack, Mattermost (plugin), Discord, Google Chat, Signal, iMessage, WebChat) and can also do voice + a live Canvas on supported platforms. The Gateway is the always-on control plane; the assistant is the product.
What's the value proposition? Moltbot is not “just a Claude wrapper.” It’s a local-first control plane that lets you run a capable assistant on your own hardware, reachable from the chat apps you already use, with stateful sessions, memory, and tools - without handing control of your workflows to a hosted SaaS. Highlights: Your devices, your data: run the Gateway wherever you want (Mac, Linux, VPS) and keep the workspace + session history local. Real channels, not a web sandbox: WhatsApp/Telegram/Slack/Discord/Signal/iMessage/etc, plus mobile voice and Canvas on supported platforms. Model-agnostic: use Anthropic, OpenAI, MiniMax, OpenRouter, etc., with per‑agent routing and failover.
Local-only option: run local models so all data can stay on your device if you want. Multi-agent routing: separate agents per channel, account, or task, each with its own workspace and defaults. Open source and hackable: inspect, extend, and self-host without vendor lock‑in. ## Page 393 Docs: Gateway, Channels, Multi‑agent, Memory.
I just set it up. What should I do first? Good first projects: Build a website (WordPress, Shopify, or a simple static site). Prototype a mobile app (outline, screens, API plan). Organize files and folders (cleanup, naming, tagging). Connect Gmail and automate summaries or follow-ups.
It can handle large tasks, but it works best when you split them into phases and use sub-agents for parallel work. What are the top five everyday use cases for Moltbot? Everyday wins usually look like: Personal briefings: summaries of inbox, calendar, and news you care about. Research and drafting: quick research, summaries, and first drafts for emails or docs. Reminders and follow-ups: cron- or heartbeat-driven nudges and checklists.
Browser automation: filling forms, collecting data, and repeating web tasks. ## Page 394 Cross-device coordination: send a task from your phone, let the Gateway run it on a server, and get the result back in chat. Can Moltbot help with lead gen, outreach, ads, and blogs for a SaaS? Yes, for research, qualification, and drafting. It can scan sites, build shortlists, summarize prospects, and write outreach or ad copy drafts.
For outreach or ad runs, keep a human in the loop. Avoid spam, follow local laws and platform policies, and review anything before it is sent. The safest pattern is to let Moltbot draft and you approve. Docs: Security.
What are the advantages vs Claude Code for web development? Moltbot is a personal assistant and coordination layer, not an IDE replacement. Use Claude Code or Codex for the fastest direct coding loop inside a repo. Use Moltbot when you want durable memory, cross-device access, and tool orchestration. ## Page 395 Advantages:
- Persistent memory + workspace across sessions
- Multi-platform access (WhatsApp, Telegram, TUI, WebChat)
- Tool orchestration (browser, files, scheduling, hooks)
- Always-on Gateway (run on a VPS, interact from anywhere)
- Nodes for local browser/screen/camera/exec

Showcase: https://molt.bot/showcase Skills and automation How do I customize skills without keeping the repo dirty? Use managed overrides instead of editing the repo copy.
Put your changes in ~/.clawdbot/skills//SKILL.md (or add a folder via skills.load.extraDirs in ~/.clawdbot/moltbot.json). Precedence is /skills > ~/.clawdbot/skills > bundled, so managed overrides win without touching git. Only upstream-worthy edits should live in the repo and go out as PRs. Can I load skills from a custom folder? ## Page 396 Yes.
Add extra directories via skills.load.extraDirs in ~/.clawdbot/moltbot.json (lowest precedence). Default precedence remains: /skills → ~/.clawdbot/skills → bundled → skills.load.extraDirs. clawdhub installs into ./skills by default, which Moltbot treats as /skills. How can I use different models for different tasks? Today the supported patterns are: Cron jobs: isolated jobs can set a model override per job.
Sub-agents: route tasks to separate agents with different default models. On-demand switch: use /model to switch the current session model at any time. See Cron jobs, Multi-Agent Routing, and Slash commands. The bot freezes while doing heavy work. How do I offload that? Use sub-agents for long or parallel tasks.
Sub-agents run in their own session, return a summary, and keep your main chat responsive. ## Page 397 Ask your bot to “spawn a sub-agent for this task” or use /subagents. Use /status in chat to see what the Gateway is doing right now (and whether it is busy). Token tip: long tasks and sub-agents both consume tokens.
If cost is a concern, set a cheaper model for sub-agents via agents.defaults.subagents.model. Docs: Sub-agents. Cron or reminders do not fire. What should I check? Cron runs inside the Gateway process. If the Gateway is not running continuously, scheduled jobs will not run.
Checklist: Confirm cron is enabled (cron.enabled) and CLAWDBOT_SKIP_CRON is not set. Check the Gateway is running 24/7 (no sleep/restarts). Verify timezone settings for the job (--tz vs host timezone). Debug:

```
moltbot cron run --force
moltbot cron runs --id --limit 50
```

## Page 398 Docs: Cron jobs, Cron vs Heartbeat.
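For reference, the enable flag mentioned in the checklist looks like this as a minimal fragment of ~/.clawdbot/moltbot.json:

```
{
  cron: {
    enabled: true  // jobs only fire while the Gateway process is running
  }
}
```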
How do I install skills on Linux? Use ClawdHub (CLI) or drop skills into your workspace. The macOS Skills UI isn’t available on Linux. Browse skills at https://clawdhub.com. Install the ClawdHub CLI (pick one package manager):

```
npm i -g clawdhub
```

```
pnpm add -g clawdhub
```

## Page 399 Can Moltbot run tasks on a schedule or continuously in the background? Yes.
Use the Gateway scheduler: Cron jobs for scheduled or recurring tasks (persist across restarts). Heartbeat for “main session” periodic checks. Isolated jobs for autonomous agents that post summaries or deliver to chats. Docs: Cron jobs, Cron vs Heartbeat, Heartbeat.
Can I run Apple/macOS-only skills from Linux? Not directly. macOS skills are gated by metadata.clawdbot.os plus required binaries, and skills only appear in the system prompt when they are eligible on the Gateway host. On Linux, darwin-only skills (like imsg, apple-notes, apple-reminders) will not load unless you override the gating. You have three supported patterns: Option A - run the Gateway on a Mac (simplest).
Run the Gateway where the macOS binaries exist, then connect from Linux in remote mode or over Tailscale. The skills load normally because the Gateway host is macOS. Option B - use a macOS node (no SSH). ## Page 400 Run the Gateway on Linux, pair a macOS node (menubar app), and set Node Run Commands to “Always Ask” or “Always Allow” on the Mac.
Moltbot can treat macOS-only skills as eligible when the required binaries exist on the node. The agent runs those skills via the nodes tool. If you choose “Always Ask”, approving “Always Allow” in the prompt adds that command to the allowlist. Option C - proxy macOS binaries over SSH (advanced).
Keep the Gateway on Linux, but make the required CLI binaries resolve to SSH wrappers that run on a Mac.
Then override the skill to allow Linux so it stays eligible.

1. Create an SSH wrapper for the binary (example: imsg):

```
#!/usr/bin/env bash
set -euo pipefail
exec ssh -T user@mac-host /opt/homebrew/bin/imsg "$@"
```

2. Put the wrapper on PATH on the Linux host (for example ~/bin/imsg).
3. Override the skill metadata (workspace or ~/.clawdbot/skills) to allow Linux:

```
---
name: imsg
description: iMessage/SMS CLI for listing chats, history, watch, and sending.
metadata: {"moltbot":{"os":["darwin","linux"],"requires":{"bins":["imsg"]}}}
---
```

## Page 401

4. Start a new session so the skills snapshot refreshes.
For iMessage specifically, you can also point channels.imessage.cliPath at an SSH wrapper (Moltbot only needs stdio). See iMessage. Do you have a Notion or HeyGen integration? Not built‑in today. Options: Custom skill / plugin: best for reliable API access (Notion/HeyGen both have APIs).
Browser automation: works without code but is slower and more fragile. If you want to keep context per client (agency workflows), a simple pattern is: ## Page 402 One Notion page per client (context + preferences + active work). Ask the agent to fetch that page at the start of a session. If you want a native integration, open a feature request or build a skill targeting those APIs.
Install skills:

```
clawdhub install
clawdhub update --all
```

ClawdHub installs into ./skills under your current directory (or falls back to your configured Moltbot workspace); Moltbot treats that as /skills on the next session. For shared skills across agents, place them in ~/.clawdbot/skills//SKILL.md. Some skills expect binaries installed via Homebrew; on Linux that means Linuxbrew (see the Homebrew Linux FAQ entry above). See Skills and ClawdHub.
How do I install the Chrome extension for browser takeover? Use the built-in installer, then load the unpacked extension in Chrome: ## Page 403

```
moltbot browser extension install
moltbot browser extension path
```

Then Chrome → chrome://extensions → enable “Developer mode” → “Load unpacked” → pick that folder. Full guide (including remote Gateway + security notes): Chrome extension. If the Gateway runs on the same machine as Chrome (default setup), you usually do not need anything extra. If the Gateway runs elsewhere, run a node host on the browser machine so the Gateway can proxy browser actions. You still need to click the extension button on the tab you want to control (it doesn’t auto-attach).
Sandboxing and memory Is there a dedicated sandboxing doc? Yes. See Sandboxing. For Docker-specific setup (full gateway in Docker or sandbox images), see Docker. ## Page 404 Can I keep DMs personal but make groups public and sandboxed with one agent? Yes - if your private traffic is DMs and your public traffic is groups.
Use agents.defaults.sandbox.mode: "non-main" so group/channel sessions (non-main keys) run in Docker, while the main DM session stays on-host.
Then restrict what tools are available in sandboxed sessions via tools.sandbox.tools. Setup walkthrough + example config: Groups: personal DMs + public groups. Key config reference: Gateway configuration. How do I bind a host folder into the sandbox? Set agents.defaults.sandbox.docker.binds to ["host:path:mode"] (e.g., "/home/user/src:/src:ro"). Global + per-agent binds merge; per-agent binds are ignored when scope: "shared". Use :ro for anything sensitive, and remember that binds bypass the sandbox filesystem walls.
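Putting the pieces above together, a config fragment might look like this (the paths are examples):

```
{
  agents: {
    defaults: {
      sandbox: {
        docker: {
          // "hostPath:containerPath:mode" — use :ro for anything sensitive
          binds: ["/home/user/src:/src:ro"]
        }
      }
    }
  }
}
```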
See Sandboxing and Sandbox vs Tool Policy vs Elevated for examples and safety notes. ## Page 405 How does memory work? Moltbot memory is just Markdown files in the agent workspace: daily notes in memory/YYYY-MM-DD.md, and curated long-term notes in MEMORY.md (main/private sessions only). Moltbot also runs a silent pre-compaction memory flush to remind the model to write durable notes before auto-compaction.
This only runs when the workspace is writable (read-only sandboxes skip it). See Memory. Memory keeps forgetting things. How do I make it stick? Ask the bot to write the fact to memory. Long-term notes belong in MEMORY.md; short-term context goes into memory/YYYY-MM-DD.md.
This is still an area we are improving. It helps to remind the model to store memories; it will know what to do. If it keeps forgetting, verify the Gateway is using the same workspace on every run. Docs: Memory, Agent workspace.
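For illustration, the memory layout described above looks roughly like this on disk (the date is an example; the workspace path assumes the default ~/clawd):

```
~/clawd/
  MEMORY.md              # curated long-term notes (main/private sessions)
  memory/
    2026-01-15.md        # daily notes (memory/YYYY-MM-DD.md)
```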
Does semantic memory search require an OpenAI API key? ## Page 406 Only if you use OpenAI embeddings. Codex OAuth covers chat/completions and does not grant embeddings access, so signing in with Codex (OAuth or the Codex CLI login) does not help for semantic memory search. OpenAI embeddings still need a real API key (OPENAI_API_KEY or models.providers.openai.apiKey). If you don’t set a provider explicitly, Moltbot auto-selects a provider when it can resolve an API key (auth profiles, models.providers.*.apiKey, or env vars).
It prefers OpenAI if an OpenAI key resolves, otherwise Gemini if a Gemini key resolves. If neither key is available, memory search stays disabled until you configure it. If you have a local model path configured and present, Moltbot prefers local. If you’d rather stay local, set memorySearch.provider = "local" (and optionally memorySearch.fallback = "none").
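The local-only option can be sketched as a config fragment (keys as named above):

```
{
  memorySearch: {
    provider: "local",   // use a local embedding model
    fallback: "none"     // never fall back to a remote provider
  }
}
```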
If you want Gemini embeddings, set memorySearch.provider = "gemini" and provide GEMINI_API_KEY (or memorySearch.remote.apiKey). We support OpenAI, Gemini, or local embedding models - see Memory for the setup details. Does memory persist forever? What are the limits? Memory files live on disk and persist until you delete them. The limit is your storage, not the model.
## Page 407 The session context is still limited by the model context window, so long conversations can compact or truncate. That is why memory search exists - it pulls only the relevant parts back into context. Docs: Memory, Context. Where things live on disk Is all data used with Moltbot saved locally? No - Moltbot’s state is local, but external services still see what you send them. Local by default: sessions, memory files, config, and workspace live on the Gateway host (~/.clawdbot + your workspace directory).
Remote by necessity: messages you send to model providers (Anthropic/OpenAI/etc.) go to their APIs, and chat platforms (WhatsApp/Telegram/Slack/etc.) store message data on their servers. You control the footprint: using local models keeps prompts on your machine, but channel traffic still goes through the channel’s servers. Related: Agent workspace, Memory. ## Page 408 Where does Moltbot store its data? Everything lives under $CLAWDBOT_STATE_DIR (default: ~/.clawdbot):

| Path | Purpose |
| --- | --- |
| CLAWDBOT_STATE_DIR/moltbot.json | Main config (JSON5) |
| CLAWDBOT_STATE_DIR/credentials/oauth.json | Legacy OAuth import (copied into auth profiles on first use) |
| CLAWDBOT_STATE_DIR/agents//agent/auth-profiles.json | Auth profiles (OAuth + API keys) |
| CLAWDBOT_STATE_DIR/agents//agent/auth.json | Runtime auth cache (managed automatically) |
| CLAWDBOT_STATE_DIR/credentials/ | Provider state (e.g. whatsapp//creds.json) |
| CLAWDBOT_STATE_DIR/agents// | Per‑agent state (agentDir + sessions) |
| CLAWDBOT_STATE_DIR/agents//sessions/ | Conversation history & state (per agent) |
| CLAWDBOT_STATE_DIR/agents//sessions/sessions.json | Session metadata (per agent) |

## Page 409 Legacy single‑agent path: ~/.clawdbot/agent/* (migrated by moltbot doctor). Your workspace (AGENTS.md, memory files, skills, etc.) is separate and configured via agents.defaults.workspace (default: ~/clawd). Where should AGENTS.md, SOUL.md, USER.md, and MEMORY.md live? These files live in the agent workspace, not ~/.clawdbot. Workspace (per agent): AGENTS.md, SOUL.md, IDENTITY.md, USER.md, MEMORY.md (or memory.md), memory/YYYY-MM-DD.md, optional HEARTBEAT.md.
State dir (~/.clawdbot): config, credentials, auth profiles, sessions, logs, and shared skills (~/.clawdbot/skills). Default workspace is ~/clawd, configurable via:

```
{ agents: { defaults: { workspace: "~/clawd" } } }
```

## Page 410 If the bot “forgets” after a restart, confirm the Gateway is using the same workspace on every launch (and remember: remote mode uses the gateway host’s workspace, not your local laptop). Tip: if you want a durable behavior or preference, ask the bot to write it into AGENTS.md or MEMORY.md rather than relying on chat history. See Agent workspace and Memory.
What's the recommended backup strategy? Put your agent workspace in a private git repo and back it up somewhere private (for example GitHub private).
This captures memory + AGENTS/SOUL/USER files, and lets you restore the assistant’s “mind” later. Do not commit anything under ~/.clawdbot (credentials, sessions, tokens). If you need a full restore, back up both the workspace and the state directory separately (see the migration question above). Docs: Agent workspace.
## Page 411 How do I completely uninstall Moltbot? See the dedicated guide: Uninstall. Can agents work outside the workspace? Yes. The workspace is the default cwd and memory anchor, not a hard sandbox. Relative paths resolve inside the workspace, but absolute paths can access other host locations unless sandboxing is enabled.
If you need isolation, use agents.defaults.sandbox or per‑agent sandbox settings. If you want a repo to be the default working directory, point that agent’s workspace to the repo root. The Moltbot repo is just source code; keep the workspace separate unless you intentionally want the agent to work inside it. Example (repo as default cwd): None { agents: { defaults: { workspace: "~/Projects/my-repo" } } } ## Page 412 Im in remote mode where is the session store Session state is owned by the gateway host.
If you're in remote mode, the session store you care about is on the remote machine, not your local laptop. See Session management.

Config basics

What format is the config? Where is it?
Moltbot reads an optional JSON5 config from $CLAWDBOT_CONFIG_PATH (default: ~/.clawdbot/moltbot.json). If the file is missing, it uses safe‑ish defaults (including a default workspace of ~/clawd).

I set gateway.bind: "lan" (or "tailnet") and now nothing listens / the UI says unauthorized
Non-loopback binds require auth.
Configure gateway.auth.mode + gateway.auth.token (or use CLAWDBOT_GATEWAY_TOKEN). None { gateway: { bind: "lan", auth: { mode: "token", token: "replace-me" } } } Notes: gateway.remote.token is for remote CLI calls only; it does not enable local gateway auth. The Control UI authenticates via connect.params.auth.token (stored in app/UI settings). Avoid putting tokens in URLs.
## Page 414 Why do I need a token on localhost now The wizard generates a gateway token by default (even on loopback) so local WS clients must authenticate.
This blocks other local processes from calling the Gateway. Paste the token into the Control UI settings (or your client config) to connect. If you really want open loopback, remove gateway.auth from your config. Doctor can generate a token for you any time: moltbot doctor --generate-gateway-token.
Do I have to restart after changing config?
The Gateway watches the config and supports hot‑reload via gateway.reload.mode: "hybrid" (default) hot‑applies safe changes and restarts for critical ones; hot, restart, and off are also supported.

How do I enable web search and web fetch?
web_fetch works without an API key. web_search requires a Brave Search API key. Recommended: run moltbot configure --section web to store it in tools.web.search.apiKey. Environment alternative: set BRAVE_API_KEY for the Gateway process.
None { tools: { web: { search: { enabled: true, apiKey: "BRAVE_API_KEY_HERE", maxResults: 5 }, fetch: { enabled: true } } } } Notes: If you use allowlists, add web_search/web_fetch or group:web. web_fetch is enabled by default (unless explicitly disabled). Daemons read env vars from ~/.clawdbot/.env (or the service environment). Docs: Web tools.
## Page 416 How do I run a central Gateway with specialized workers across devices The common pattern is one Gateway (e.g. Raspberry Pi) plus nodes and agents: Gateway (central): owns channels (Signal/WhatsApp), routing, and sessions. Nodes (devices): Macs/iOS/Android connect as peripherals and expose local tools (system.run, canvas, camera). Agents (workers): separate brains/workspaces for special roles (e.g.
“Hetzner ops”, “Personal data”). Sub‑agents: spawn background work from a main agent when you want parallelism. TUI: connect to the Gateway and switch agents/sessions. Docs: Nodes, Remote access, Multi-Agent Routing, Sub-agents, TUI.
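That layout can be sketched in config by reusing the routing.agents shape from the sandboxing note earlier on this page. The worker agent names and workspace paths below are illustrative assumptions, not documented defaults:

```json5
{
  routing: {
    agents: {
      main: { workspace: "~/clawd" },
      // hypothetical specialized workers, one workspace ("brain") each
      "hetzner-ops": { workspace: "~/clawd-hetzner" },
      "personal-data": { workspace: "~/clawd-personal" }
    }
  }
}
```

Nodes and sub-agents then attach to this central Gateway rather than running their own.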
Can the Moltbot browser run headless?
Yes. It's a config option:

```json5
{
  browser: { headless: true },
  agents: {
    defaults: {
      sandbox: { browser: { headless: true } }
    }
  }
}
```

Default is false (headful). Headless is more likely to trigger anti‑bot checks on some sites. See Browser.
Headless uses the same Chromium engine and works for most automation (forms, clicks, scraping, logins). The main differences: No visible browser window (use screenshots if you need visuals). Some sites are stricter about automation in headless mode (CAPTCHAs, anti‑bot).
For example, X/Twitter often blocks headless sessions. How do I use Brave for browser control Set browser.executablePath to your Brave binary (or any Chromium-based browser) and restart the Gateway. See the full config examples in Browser. Remote gateways + nodes ## Page 418 How do commands propagate between Telegram the gateway and nodes Telegram messages are handled by the gateway.
The gateway runs the agent and only then calls nodes over the Gateway WebSocket when a node tool is needed: Telegram → Gateway → Agent → node.* → Node → Gateway → Telegram Nodes don’t see inbound provider traffic; they only receive node RPC calls. How can my agent access my computer if the Gateway is hosted remotely Short answer: pair your computer as a node. The Gateway runs elsewhere, but it can call node.* tools (screen, camera, system) on your local machine over the Gateway WebSocket. Typical setup: 1.Run the Gateway on the always‑on host (VPS/home server).
2. Put the Gateway host + your computer on the same tailnet.
3. Ensure the Gateway WS is reachable (tailnet bind or SSH tunnel).
4. Open the macOS app locally and connect in Remote over SSH mode (or direct tailnet) so it can register as a node.
5. Approve the node on the Gateway:

```
moltbot nodes pending
moltbot nodes approve
```

No separate TCP bridge is required; nodes connect over the Gateway WebSocket.
Security reminder: pairing a macOS node allows system.run on that machine. Only pair devices you trust, and review Security. Docs: Nodes, Gateway protocol, macOS remote mode, Security.

Tailscale is connected but I get no replies. What now?
Check the basics:
- Gateway is running: moltbot gateway status
- Gateway health: moltbot status
- Channel health: moltbot channels status

Then verify auth and routing: if you use Tailscale Serve, make sure gateway.auth.allowTailscale is set correctly.
If you connect via SSH tunnel, confirm the local tunnel is up and points at the right port. Confirm your allowlists (DM or group) include your account. Docs: Tailscale, Remote access, Channels. Can two Moltbots talk to each other local VPS Yes.
There is no built-in “bot-to-bot” bridge, but you can wire it up in a few reliable ways: Simplest: use a normal chat channel both bots can access (Telegram/Slack/WhatsApp). Have Bot A send a message to Bot B, then let Bot B reply as usual. CLI bridge (generic): run a script that calls the other Gateway with moltbot agent --message ... --deliver, targeting a chat where the other bot listens.
If one bot is on a remote VPS, point your CLI at that remote Gateway via SSH/Tailscale (see Remote access). Example pattern (run from a machine that can reach the target Gateway): None ## Page 421 moltbot agent --message "Hello from local bot" --deliver --channel telegram --reply-to Tip: add a guardrail so the two bots do not loop endlessly (mention-only, channel allowlists, or a “do not reply to bot messages” rule). Docs: Remote access, Agent CLI, Agent send. Do I need separate VPSes for multiple agents No.
One Gateway can host multiple agents, each with its own workspace, model defaults, and routing.
That is the normal setup and it is much cheaper and simpler than running one VPS per agent. Use separate VPSes only when you need hard isolation (security boundaries) or very different configs that you do not want to share. Otherwise, keep one Gateway and use multiple agents or sub-agents. ## Page 422 Is there a benefit to using a node on my personal laptop instead of SSH from a VPS Yes - nodes are the first‑class way to reach your laptop from a remote Gateway, and they unlock more than shell access.
The Gateway runs on macOS/Linux (Windows via WSL2) and is lightweight (a small VPS or Raspberry Pi-class box is fine; 4 GB RAM is plenty), so a common setup is an always‑on host plus your laptop as a node. No inbound SSH required. Nodes connect out to the Gateway WebSocket and use device pairing. Safer execution controls.
system.run is gated by node allowlists/approvals on that laptop. More device tools. Nodes expose canvas, camera, and screen in addition to system.run. Local browser automation.
Keep the Gateway on a VPS, but run Chrome locally and relay control with the Chrome extension + a node host on the laptop. SSH is fine for ad‑hoc shell access, but nodes are simpler for ongoing agent workflows and device automation. Docs: Nodes, Nodes CLI, Chrome extension. Should I install on a second laptop or just add a node If you only need local tools (screen/camera/exec) on the second laptop, add it as a node.
That keeps a single Gateway and avoids duplicated config. Local node tools are currently macOS-only, but we plan to extend them to other OSes. ## Page 423 Install a second Gateway only when you need hard isolation or two fully separate bots. Docs: Nodes, Nodes CLI, Multiple gateways.
Do nodes run a gateway service No. Only one gateway should run per host unless you intentionally run isolated profiles (see Multiple gateways). Nodes are peripherals that connect to the gateway (iOS/Android nodes, or macOS “node mode” in the menubar app). For headless node hosts and CLI control, see Node host CLI.
A full restart is required for gateway, discovery, and canvasHost changes. Is there an API RPC way to apply config Yes. config.apply validates + writes the full config and restarts the Gateway as part of the operation. ## Page 424 configapply wiped my config How do I recover and avoid this config.apply replaces the entire config.
If you send a partial object, everything else is removed. Recover: Restore from backup (git or a copied ~/.clawdbot/moltbot.json). If you have no backup, re-run moltbot doctor and reconfigure channels/models. If this was unexpected, file a bug and include your last known config or any backup.
A local coding agent can often reconstruct a working config from logs or history. Avoid it: Use moltbot config set for small changes. Use moltbot configure for interactive edits. Docs: Config, Configure, Doctor.
What's a minimal sane config for a first install?

```json5
{
  agents: { defaults: { workspace: "~/clawd" } },
  channels: { whatsapp: { allowFrom: ["+15555550123"] } }
}
```

This sets your workspace and restricts who can trigger the bot.

How do I set up Tailscale on a VPS and connect from my Mac?
Minimal steps:
1. Install + login on the VPS:

```
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
```

2. Install + login on your Mac: use the Tailscale app and sign in to the same tailnet.
3. Enable MagicDNS (recommended): in the Tailscale admin console, enable MagicDNS so the VPS has a stable name.
4. Use the tailnet hostname. SSH: ssh user@your-vps.tailnet-xxxx.ts.net; Gateway WS: ws://your-vps.tailnet-xxxx.ts.net:18789

If you want the Control UI without SSH, use Tailscale Serve on the VPS:

```
moltbot gateway --tailscale serve
```

This keeps the gateway bound to loopback and exposes HTTPS via Tailscale.
See Tailscale. How do I connect a Mac node to a remote Gateway Tailscale Serve Serve exposes the Gateway Control UI + WS. Nodes connect over the same Gateway WS endpoint. Recommended setup: 1.Make sure the VPS + Mac are on the same tailnet.
2. Use the macOS app in Remote mode (SSH target can be the tailnet hostname). The app will tunnel the Gateway port and connect as a node.
3. Approve the node on the gateway:

```
moltbot nodes pending
moltbot nodes approve
```

Docs: Gateway protocol, Discovery, macOS remote mode.

Env vars and .env loading

How does Moltbot load environment variables?
Moltbot reads env vars from the parent process (shell, launchd/systemd, CI, etc.) and additionally loads:
- .env from the current working directory
- a global fallback .env from ~/.clawdbot/.env (aka $CLAWDBOT_STATE_DIR/.env)

Neither .env file overrides existing env vars.
You can also define inline env vars in config (applied only if missing from the process env): None ## Page 428 { env: { OPENROUTER_API_KEY: "sk-or-...", vars: { GROQ_API_KEY: "gsk-..." } } } See /environment for full precedence and sources. I started the Gateway via the service and my env vars disappeared What now Two common fixes: 1.Put the missing keys in ~/.clawdbot/.env so they’re picked up even when the service doesn’t inherit your shell env. 2.Enable shell import (opt‑in convenience): None { env: { shellEnv: { enabled: true, timeoutMs: 15000 } ## Page 429 } } This runs your login shell and imports only missing expected keys (never overrides). Env var equivalents: CLAWDBOT_LOAD_SHELL_ENV=1, CLAWDBOT_SHELL_ENV_TIMEOUT_MS=15000.
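For the service case, a minimal ~/.clawdbot/.env sketch (the key names appear elsewhere on this page; the values are placeholders):

```
# Global fallback .env - loaded by the Gateway, never overrides existing env vars.
ANTHROPIC_API_KEY=sk-ant-...
BRAVE_API_KEY=...
```

Because this file is read even when launchd/systemd strips your shell environment, it is the most robust place for provider keys.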
I set COPILOT_GITHUB_TOKEN but models status shows "Shell env: off". Why?
moltbot models status reports whether shell env import is enabled. "Shell env: off" does not mean your env vars are missing - it just means Moltbot won't load your login shell automatically. If the Gateway runs as a service (launchd/systemd), it won't inherit your shell environment. Fix by doing one of these:
1. Put the token in ~/.clawdbot/.env:

```
COPILOT_GITHUB_TOKEN=...
```

2. Or enable shell import (env.shellEnv.enabled: true).
3. Or add it to your config env block (applies only if missing).
Then restart the gateway and recheck: None moltbot models status Copilot tokens are read from COPILOT_GITHUB_TOKEN (also GH_TOKEN / GITHUB_TOKEN). See /concepts/model-providers and /environment. Sessions & multiple chats How do I start a fresh conversation Send /new or /reset as a standalone message. See Session management.
Do sessions reset automatically if I never send /new?
Yes. Sessions expire after session.idleMinutes (default 60). The next message starts a fresh session id for that chat key.
This does not delete transcripts - it just starts a new session. None { session: { idleMinutes: 240 } } Is there a way to make a team of Moltbots one CEO and many agents Yes, via multi-agent routing and sub-agents. You can create one coordinator agent and several worker agents with their own workspaces and models.
That said, this is best seen as a fun experiment. It is token heavy and often less efficient than using one bot with separate sessions. The typical model we envision is one bot you talk to, with different sessions for parallel work.
That bot can also spawn sub-agents when needed. Docs: Multi-agent routing, Sub-agents, Agents CLI. ## Page 432 Why did context get truncated midtask How do I prevent it Session context is limited by the model window. Long chats, large tool outputs, or many files can trigger compaction or truncation.
What helps: Ask the bot to summarize the current state and write it to a file. Use /compact before long tasks, and /new when switching topics. Keep important context in the workspace and ask the bot to read it back. Use sub-agents for long or parallel work so the main chat stays smaller.
Pick a model with a larger context window if this happens often.

How do I completely reset Moltbot but keep it installed?
Use the reset command:

```
moltbot reset
```

Non-interactive full reset:

```
moltbot reset --scope full --yes --non-interactive
```

Then re-run onboarding:

```
moltbot onboard --install-daemon
```

Notes: The onboarding wizard also offers Reset if it sees an existing config. See Wizard. If you used profiles (--profile / CLAWDBOT_PROFILE), reset each state dir (defaults are ~/.clawdbot-<profile>).
Dev reset: moltbot gateway --dev --reset (dev-only; wipes dev config + credentials + sessions + workspace). Im getting context too large errors how do I reset or compact Use one of these: ## Page 434 Compact (keeps the conversation but summarizes older turns): None /compact or /compact to guide the summary. Reset (fresh session ID for the same chat key): None /new /reset If it keeps happening: Enable or tune session pruning (agents.defaults.contextPruning) to trim old tool output. Use a model with a larger context window.
Docs: Compaction, Session pruning, Session management.

Why am I seeing "LLM request rejected: messages.N.content.X.tool_use.input: Field required"?
This is a provider validation error: the model emitted a tool_use block without the required input. It usually means the session history is stale or corrupted (often after long threads or a tool/schema change). Fix: start a fresh session with /new (standalone message).
Why am I getting heartbeat messages every 30 minutes?
Heartbeats run every 30m by default. Tune or disable them:

```json5
{
  agents: {
    defaults: {
      heartbeat: {
        every: "2h" // or "0m" to disable
      }
    }
  }
}
```

If HEARTBEAT.md exists but is effectively empty (only blank lines and markdown headers like # Heading), Moltbot skips the heartbeat run to save API calls. If the file is missing, the heartbeat still runs and the model decides what to do. Per-agent overrides use agents.list[].heartbeat.
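A per-agent override sketch using the agents.list[].heartbeat field named above (the id field and its value are assumptions for illustration):

```json5
{
  agents: {
    list: [
      // hypothetical agent entry; heartbeat is the documented override field
      { id: "main", heartbeat: { every: "4h" } }
    ]
  }
}
```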
Docs: Heartbeat. Do I need to add a bot account to a WhatsApp group No. Moltbot runs on your own account, so if you’re in the group, Moltbot can see it. By default, group replies are blocked until you allow senders (groupPolicy: "allowlist").
If you want only you to be able to trigger group replies:

```json5
{
  channels: {
    whatsapp: {
      groupPolicy: "allowlist",
      groupAllowFrom: ["+15551234567"]
    }
  }
}
```

How do I get the JID of a WhatsApp group?
Option 1 (fastest): tail logs and send a test message in the group:

```
moltbot logs --follow --json
```

Look for chatId (or from) ending in @g.us, like: 1234567890-1234567890@g.us.

Option 2 (if already configured/allowlisted): list groups from config:

```
moltbot directory groups list --channel whatsapp
```

Docs: WhatsApp, Directory, Logs.

Why doesn't Moltbot reply in a group?
Two common causes: Mention gating is on (default). You must @mention the bot (or match mentionPatterns).
You configured channels.whatsapp.groups without "*" and the group isn't allowlisted. See Groups and Group messages.

Do groups/threads share context with DMs?
Direct chats collapse to the main session by default. Groups/channels have their own session keys, and Telegram topics / Discord threads are separate sessions.
See Groups and Group messages.

How many workspaces and agents can I create?
No hard limits. Dozens (even hundreds) are fine, but watch for:
- Disk growth: sessions + transcripts live under ~/.clawdbot/agents/<agentId>/sessions/.
- Token cost: more agents means more concurrent model usage.
- Ops overhead: per-agent auth profiles, workspaces, and channel routing.

Tips:
- Keep one active workspace per agent (agents.defaults.workspace).
- Prune old sessions (delete JSONL or store entries) if disk grows.
- Use moltbot doctor to spot stray workspaces and profile mismatches.
Can I run multiple bots or chats at the same time (e.g. Slack), and how should I set that up?
Yes. Use Multi‑Agent Routing to run multiple isolated agents and route inbound messages by channel/account/peer. Slack is supported as a channel and can be bound to specific agents. Browser access is powerful but not "do anything a human can" - anti‑bot, CAPTCHAs, and MFA can still block automation.
For the most reliable browser control, use the Chrome extension relay on the machine that runs the browser (and keep the Gateway anywhere). Best‑practice setup: Always‑on Gateway host (VPS/Mac mini). One agent per role (bindings). Slack channel(s) bound to those agents.
Local browser via extension relay (or a node) when needed. Docs: Multi‑Agent Routing, Slack, Browser, Chrome extension, Nodes.

Models: defaults, selection, aliases, switching

What is the default model?
Moltbot's default model is whatever you set as agents.defaults.model.primary. Models are referenced as provider/model (example: anthropic/claude-opus-4-5). If you omit the provider, Moltbot currently assumes anthropic as a temporary deprecation fallback - but you should still explicitly set provider/model.
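For instance, a minimal explicit default (the field path and model ID are both from this page):

```json5
{
  agents: {
    defaults: {
      model: { primary: "anthropic/claude-opus-4-5" }
    }
  }
}
```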
What model do you recommend Recommended default: anthropic/claude-opus-4-5. Good alternative: anthropic/claude-sonnet-4-5. Reliable (less character): openai/gpt-5.2 - nearly as good as Opus, just less personality. Budget: zai/glm-4.7.
Page 441 MiniMax M2.1 has its own docs: MiniMax and Local models. Rule of thumb: use the best model you can afford for high-stakes work, and a cheaper model for routine chat or summaries. You can route models per agent and use sub-agents to parallelize long tasks (each sub-agent consumes tokens). See Models and Sub-agents.
Strong warning: weaker/over-quantized models are more vulnerable to prompt injection and unsafe behavior. See Security. More context: Models. Can I use selfhosted models llamacpp vLLM Ollama Yes.
If your local server exposes an OpenAI-compatible API, you can point a custom provider at it. Ollama is supported directly and is the easiest path.
Security note: smaller or heavily quantized models are more vulnerable to prompt injection. We strongly recommend large models for any bot that can use tools. If you still want small models, enable sandboxing and strict tool allowlists. Docs: Ollama, Local models, Model providers, Security, Sandboxing.
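As a rough sketch of the custom-provider idea: the provider block shape below is an assumption, not the documented schema (see Model providers for the real one); baseUrl and api are the endpoint fields this page mentions under /model status, and the model name is illustrative.

```json5
{
  // Hypothetical shape: point a custom provider at a local
  // OpenAI-compatible server (llama.cpp / vLLM / similar).
  models: {
    providers: {
      local: {
        api: "openai",
        baseUrl: "http://127.0.0.1:8080/v1"
      }
    }
  },
  agents: {
    defaults: { model: { primary: "local/llama-3.1-8b" } }
  }
}
```

Ollama needs none of this hand-wiring; it is supported directly.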
## Page 442 How do I switch models without wiping my config Use model commands or edit only the model fields. Avoid full config replaces. Safe options: /model in chat (quick, per-session) moltbot models set ... (updates just model config) moltbot configure --section models (interactive) edit agents.defaults.model in ~/.clawdbot/moltbot.json Avoid config.apply with a partial object unless you intend to replace the whole config.
If you did overwrite config, restore from backup or re-run moltbot doctor to repair. Docs: Models, Configure, Config, Doctor. What do Clawd Flawd and Krill use for models Clawd + Flawd: Anthropic Opus (anthropic/claude-opus-4-5) - see Anthropic. Krill: MiniMax M2.1 (minimax/MiniMax-M2.1) - see MiniMax.
How do I switch models on the fly without restarting?
Use the /model command as a standalone message:

```
/model sonnet
/model haiku
/model opus
/model gpt
/model gpt-mini
/model gemini
/model gemini-flash
```

You can list available models with /model, /model list, or /model status. /model (and /model list) shows a compact, numbered picker. Select by number:

```
/model 3
```

You can also force a specific auth profile for the provider (per session):

```
/model opus@anthropic:default
/model opus@anthropic:work
```

Tip: /model status shows which agent is active, which auth-profiles.json file is being used, and which auth profile will be tried next. It also shows the configured provider endpoint (baseUrl) and API mode (api) when available.
How do I unpin a profile I set with @profile?
Re-run /model without the @profile suffix:

```
/model anthropic/claude-opus-4-5
```

If you want to return to the default, pick it from /model (or send /model <default provider/model>). Use /model status to confirm which auth profile is active.

Can I use GPT 5.2 for daily tasks and Codex 5.2 for coding?
Yes. Set one as default and switch as needed: Quick switch (per session): /model gpt-5.2 for daily tasks, /model gpt-5.2-codex for coding.
Default + switch: set agents.defaults.model.primary to openai-codex/gpt-5.2, then switch to openai-codex/gpt-5.2-codex when coding (or the other way around). Sub-agents: route coding tasks to sub-agents with a different default model. See Models and Slash commands. Why do I see Model is not allowed and then no reply If agents.defaults.models is set, it becomes the allowlist for /model and any session overrides.
Choosing a model that isn't in that list returns:

```
Model "provider/model" is not allowed. Use /model to list available models.
```
That error is returned instead of a normal reply. Fix: add the model to agents.defaults.models, remove the allowlist, or pick a model from /model list.

Why do I see "Unknown model minimax/MiniMax-M2.1"?
This means the provider isn't configured (no MiniMax provider config or auth profile was found), so the model can't be resolved. A fix for this detection is in 2026.1.12 (unreleased at the time of writing).
Fix checklist:
1. Upgrade to 2026.1.12 (or run from source main), then restart the gateway.
2. Make sure MiniMax is configured (wizard or JSON), or that a MiniMax API key exists in env/auth profiles so the provider can be injected.
3. Use the exact model id (case‑sensitive): minimax/MiniMax-M2.1 or minimax/MiniMax-M2.1-lightning.
4. Run:

```
moltbot models list
```

and pick from the list (or /model list in chat).
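Putting that checklist together, a minimal MiniMax setup might look like this (the env-key placement mirrors the Option A example later on this page; the key value is a placeholder):

```json5
{
  env: { MINIMAX_API_KEY: "sk-..." },
  agents: {
    defaults: {
      model: { primary: "minimax/MiniMax-M2.1" },
      models: { "minimax/MiniMax-M2.1": {} }
    }
  }
}
```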
See MiniMax and Models. Can I use MiniMax as my default and OpenAI for complex tasks ## Page 447 Yes. Use MiniMax as the default and switch models per session when needed. Fallbacks are for errors, not “hard tasks,” so use /model or a separate agent.
Option A: switch per session

```json5
{
  env: {
    MINIMAX_API_KEY: "sk-...",
    OPENAI_API_KEY: "sk-..."
  },
  agents: {
    defaults: {
      model: { primary: "minimax/MiniMax-M2.1" },
      models: {
        "minimax/MiniMax-M2.1": { alias: "minimax" },
        "openai/gpt-5.2": { alias: "gpt" }
      }
    }
  }
}
```

Then:

```
/model gpt
```

Option B: separate agents
- Agent A default: MiniMax
- Agent B default: OpenAI
- Route by agent or use /agent to switch

Docs: Models, Multi-Agent Routing, MiniMax, OpenAI.

Are opus / sonnet / gpt built-in shortcuts?
Yes. Moltbot ships a few default shorthands (only applied when the model exists in agents.defaults.models):
- opus → anthropic/claude-opus-4-5
- sonnet → anthropic/claude-sonnet-4-5
- gpt → openai/gpt-5.2
- gpt-mini → openai/gpt-5-mini
- gemini → google/gemini-3-pro-preview
- gemini-flash → google/gemini-3-flash-preview

If you set your own alias with the same name, your value wins.

How do I define/override model shortcuts (aliases)?
Aliases come from agents.defaults.models.<model>.alias.
Example:

```json5
{
  agents: {
    defaults: {
      model: { primary: "anthropic/claude-opus-4-5" },
      models: {
        "anthropic/claude-opus-4-5": { alias: "opus" },
        "anthropic/claude-sonnet-4-5": { alias: "sonnet" },
        "anthropic/claude-haiku-4-5": { alias: "haiku" }
      }
    }
  }
}
```

Then /model sonnet (or /<alias> when supported) resolves to that model ID.

How do I add models from other providers like OpenRouter or Z.AI?
OpenRouter (pay‑per‑token; many models):

```json5
{
  agents: {
    defaults: {
      model: { primary: "openrouter/anthropic/claude-sonnet-4-5" },
      models: { "openrouter/anthropic/claude-sonnet-4-5": {} }
    }
  },
  env: { OPENROUTER_API_KEY: "sk-or-..." }
}
```

Z.AI (GLM models):

```json5
{
  agents: {
    defaults: {
      model: { primary: "zai/glm-4.7" },
      models: { "zai/glm-4.7": {} }
    }
  },
  env: { ZAI_API_KEY: "..." }
}
```

If you reference a provider/model but the required provider key is missing, you'll get a runtime auth error (e.g. No API key found for provider "zai").

"No API key found for provider" after adding a new agent
Auth is per-agent and stored in: None ~/.clawdbot/agents//agent/auth-profiles.json Fix options: Run moltbot agents add and configure auth during the wizard. Or copy auth-profiles.json from the main agent’s agentDir into the new agent’s agentDir. Do not reuse agentDir across agents; it causes auth/session collisions. Model failover and “All models failed” How does failover work Failover happens in two stages: 1.Auth profile rotation within the same provider.
2. Model fallback to the next model in agents.defaults.model.fallbacks.

Cooldowns apply to failing profiles (exponential backoff), so Moltbot can keep responding even when a provider is rate‑limited or temporarily failing.

What does this error mean?

```
No credentials found for profile "anthropic:default"
```

It means the system attempted to use the auth profile ID anthropic:default, but could not find credentials for it in the expected auth store.

Fix checklist for "No credentials found for profile anthropic:default"
- Confirm where auth profiles live (new vs legacy paths). Current: ~/.clawdbot/agents/<agentId>/agent/auth-profiles.json. Legacy: ~/.clawdbot/agent/* (migrated by moltbot doctor).
- Confirm your env var is loaded by the Gateway.
Put it in ~/.clawdbot/.env or enable env.shellEnv. Make sure you’re editing the correct agent Multi‑agent setups mean there can be multiple auth-profiles.json files. Sanity‑check model/auth status Use moltbot models status to see configured models and whether providers are authenticated. Fix checklist for No credentials found for profile anthropic This means the run is pinned to an Anthropic auth profile, but the Gateway can’t find it in its auth store.
- Use a setup-token: run claude setup-token, then paste it with moltbot models auth setup-token --provider anthropic. If the token was created on another machine, use moltbot models auth paste-token --provider anthropic.
- If you want to use an API key instead: put ANTHROPIC_API_KEY in ~/.clawdbot/.env on the gateway host.
- Clear any pinned order that forces a missing profile:

```
moltbot models auth order clear --provider anthropic
```

- Confirm you're running commands on the gateway host: in remote mode, auth profiles live on the gateway machine, not your laptop.
Why did it also try Google Gemini and fail?
If your model config includes Google Gemini as a fallback (or you switched to a Gemini shorthand), Moltbot will try it during model fallback. If you haven't configured Google credentials, you'll see No API key found for provider "google". Fix: either provide Google auth, or remove/avoid Google models in agents.defaults.model.fallbacks / aliases so fallback doesn't route there.

LLM request rejected: "thinking signature required" (Google Antigravity)
Cause: the session history contains thinking blocks without signatures (often from an aborted/partial stream).
Google Antigravity requires signatures for thinking blocks. Fix: Moltbot now strips unsigned thinking blocks for Google Antigravity Claude. If it still appears, start a new session or set /thinking off for that agent. Auth profiles: what they are and how to manage them Related: /concepts/oauth (OAuth flows, token storage, multi-account patterns) ## Page 455 What is an auth profile An auth profile is a named credential record (OAuth or API key) tied to a provider.
Profiles live in:

```
~/.clawdbot/agents/<agentId>/agent/auth-profiles.json
```

What are typical profile IDs?
Moltbot uses provider‑prefixed IDs like:
- anthropic:default (common when no email identity exists)
- anthropic:<email> for OAuth identities
- custom IDs you choose (e.g. anthropic:work)

Can I control which auth profile is tried first?
Yes. Config supports optional metadata for profiles and an ordering per provider (auth.order.<provider>).
This does not store secrets; it maps IDs to provider/mode and sets rotation order. Moltbot may temporarily skip a profile if it’s in a short cooldown (rate limits/timeouts/auth failures) or a longer disabled state (billing/insufficient credits). To inspect this, run moltbot models status --json and check auth.unusableProfiles. Tuning: auth.cooldowns.billingBackoffHours*.
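A config-level ordering sketch (the profile IDs are examples from this page; the exact nesting under auth.order.<provider> is an assumption):

```json5
{
  auth: {
    order: {
      // try the work profile first, then fall back to default
      anthropic: ["anthropic:work", "anthropic:default"]
    }
  }
}
```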
You can also set a per-agent order override (stored in that agent’s auth-profiles.json) via the CLI: None # Defaults to the configured default agent (omit --agent) moltbot models auth order get --provider anthropic # Lock rotation to a single profile (only try this one) moltbot models auth order set --provider anthropic anthropic:default # Or set an explicit order (fallback within provider) moltbot models auth order set --provider anthropic anthropic:work anthropic:default # Clear override (fall back to config auth.order / round-robin) moltbot models auth order clear --provider anthropic To target a specific agent: ## Page 457 None moltbot models auth order set --provider anthropic --agent main anthropic:default OAuth vs API key whats the difference Moltbot supports both: OAuth often leverages subscription access (where applicable). API keys use pay‑per‑token billing. The wizard explicitly supports Anthropic setup-token and OpenAI Codex OAuth and can store API keys for you. Gateway: ports, “already running”, and remote mode What port does the Gateway use gateway.port controls the single multiplexed port for WebSocket + HTTP (Control UI, hooks, etc.).
Page 458 Precedence: None --port > CLAWDBOT_GATEWAY_PORT > gateway.port > default 18789 Why does moltbot gateway status say Runtime running but RPC probe failed Because “running” is the supervisor’s view (launchd/systemd/schtasks). The RPC probe is the CLI actually connecting to the gateway WebSocket and calling status. Use moltbot gateway status and trust these lines: Probe target: (the URL the probe actually used) Listening: (what’s actually bound on the port) Last gateway error: (common root cause when the process is alive but the port isn’t listening) ## Page 459 Why does moltbot gateway status show Config cli and Config service different You’re editing one config file while the service is running another (often a --profile / CLAWDBOT_STATE_DIR mismatch). Fix: None moltbot gateway install --force Run that from the same --profile / environment you want the service to use.
### What does "another gateway instance is already listening" mean?

Moltbot enforces a runtime lock by binding the WebSocket listener immediately on startup (default `ws://127.0.0.1:18789`). If the bind fails with `EADDRINUSE`, it throws `GatewayLockError`, indicating another instance is already listening.

Fix: stop the other instance, free the port, or run with `moltbot gateway --port <port>`.

### How do I run Moltbot in remote mode (client connects to a Gateway elsewhere)?

Set `gateway.mode: "remote"` and point to a remote WebSocket URL, optionally with a token/password:

```json5
{
  gateway: {
    mode: "remote",
    remote: {
      url: "ws://gateway.tailnet:18789",
      token: "your-token",
      password: "your-password"
    }
  }
}
```

Notes:

- `moltbot gateway` only starts when `gateway.mode` is `local` (or you pass the override flag).
- The macOS app watches the config file and switches modes live when these values change.

### The Control UI says "unauthorized" or keeps reconnecting. What now?

Your gateway is running with auth enabled (`gateway.auth.*`), but the UI is not sending the matching token/password. Facts (from code):

- The Control UI stores the token in the browser `localStorage` key `moltbot.control.settings.v1`.
- The UI can import `?token=...`
(and/or `?password=...`) once, then strips it from the URL.

Fix:

- Fastest: `moltbot dashboard` (prints and copies a tokenized link, tries to open it; shows an SSH hint if headless).
- If you don't have a token yet: `moltbot doctor --generate-gateway-token`.
- If remote, tunnel first: `ssh -N -L 18789:127.0.0.1:18789 user@host`, then open `http://127.0.0.1:18789/?token=...`.
- Set `gateway.auth.token` (or `CLAWDBOT_GATEWAY_TOKEN`) on the gateway host.
- In the Control UI settings, paste the same token (or refresh with a one-time `?token=...` link).

Still stuck?
Run `moltbot status --all` and follow Troubleshooting. See Dashboard for auth details.

### I set `gateway.bind: "tailnet"` but it can't bind (nothing listens)

The `tailnet` bind picks a Tailscale IP from your network interfaces (100.64.0.0/10). If the machine isn't on Tailscale (or the interface is down), there's nothing to bind to.
Fix:

- Start Tailscale on that host (so it has a 100.x address), or
- switch to `gateway.bind: "loopback"` / `"lan"`.
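Tailscale hands out addresses from the CGNAT range 100.64.0.0/10 mentioned above. A sketch of how an interface address could be tested against that range (illustrative only, not Moltbot's actual interface-selection code):

```python
import ipaddress

# Tailscale assigns addresses from the CGNAT range 100.64.0.0/10.
TAILNET_RANGE = ipaddress.ip_network("100.64.0.0/10")

def is_tailnet_address(addr: str) -> bool:
    """Return True when addr looks like a Tailscale (CGNAT) address.
    A sketch of the selection rule described above; Moltbot's real
    implementation enumerates network interfaces."""
    try:
        return ipaddress.ip_address(addr) in TAILNET_RANGE
    except ValueError:  # not an IP address at all
        return False
```

If no interface address falls inside that range, a `tailnet` bind has nothing to attach to, which is exactly the "nothing listens" symptom.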
Note: `tailnet` is explicit. `auto` prefers loopback; use `gateway.bind: "tailnet"` when you want a tailnet-only bind.

### Can I run multiple Gateways on the same host?

Usually you shouldn't need to: one Gateway can run multiple messaging channels and agents. Use multiple Gateways only when you need redundancy (e.g. a rescue bot) or hard isolation.
It is possible, but you must isolate:

- `CLAWDBOT_CONFIG_PATH` (per-instance config)
- `CLAWDBOT_STATE_DIR` (per-instance state)
- `agents.defaults.workspace` (workspace isolation)
- `gateway.port` (unique ports)

Quick setup (recommended):

- Use `moltbot --profile <name>` per instance (auto-creates `~/.clawdbot-<name>`).
- Set a unique `gateway.port` in each profile config (or pass `--port` for manual runs).
- Install a per-profile service: `moltbot --profile <name> gateway install`.
- Profiles also suffix service names (`com.clawdbot.<name>`, `moltbot-gateway-<name>.service`, `Moltbot Gateway (<name>)`).
Full guide: Multiple gateways.

### What does "invalid handshake" (code 1008) mean?

The Gateway is a WebSocket server, and it expects the very first message to be a `connect` frame. If it receives anything else, it closes the connection with code 1008 (policy violation).

Common causes:

- You opened the HTTP URL in a browser (`http://...`) instead of using a WS client.
- You used the wrong port or path.
- A proxy or tunnel stripped auth headers or sent a non-Gateway request.

Quick fixes:

1. Use the WS URL: `ws://<host>:18789` (or `wss://...` if behind HTTPS).
2. Don't open the WS port in a normal browser tab.
3. If auth is on, include the token/password in the `connect` frame.

If you're using the CLI or TUI, the URL should look like:

```bash
moltbot tui --url ws://<host>:18789 --token <token>
```

Protocol details: Gateway protocol.

## Logging and debugging

### Where are the logs?

File logs (structured):

```
/tmp/moltbot/moltbot-YYYY-MM-DD.log
```

You can set a stable path via `logging.file`.
File log level is controlled by `logging.level`. Console verbosity is controlled by `--verbose` and `logging.consoleLevel`.

Fastest log tail:

```bash
moltbot logs --follow
```

Service/supervisor logs (when the gateway runs via launchd/systemd):

- macOS: `$CLAWDBOT_STATE_DIR/logs/gateway.log` and `gateway.err.log` (default: `~/.clawdbot/logs/...`; profiles use `~/.clawdbot-<profile>/logs/...`)
- Linux: `journalctl --user -u moltbot-gateway[-<profile>].service -n 200 --no-pager`
- Windows: `schtasks /Query /TN "Moltbot Gateway (<profile>)" /V /FO LIST`

See Troubleshooting for more.

### How do I start/stop/restart the Gateway service?

Use the gateway helpers:

```bash
moltbot gateway status
moltbot gateway restart
```

If you run the gateway manually, `moltbot gateway --force` can reclaim the port.
See Gateway.

### I closed my terminal on Windows. How do I restart Moltbot?

There are two Windows install modes:

1) WSL2 (recommended): the Gateway runs inside Linux. Open PowerShell, enter WSL, then restart:

```bash
wsl
moltbot gateway status
moltbot gateway restart
```

If you never installed the service, start it in the foreground:

```bash
moltbot gateway run
```

2) Native Windows (not recommended): the Gateway runs directly on Windows. Open PowerShell and run:

```powershell
moltbot gateway status
moltbot gateway restart
```

If you run it manually (no service), use:

```powershell
moltbot gateway run
```

Docs: Windows (WSL2), Gateway service runbook.
### The Gateway is up, but replies never arrive. What should I check?

Start with a quick health sweep:

```bash
moltbot status
moltbot models status
moltbot channels status
moltbot logs --follow
```

Common causes:

- Model auth not loaded on the gateway host (check `models status`).
- Channel pairing/allowlist blocking replies (check channel config + logs).
- WebChat/Dashboard is open without the right token.
- If you are remote, confirm the tunnel/Tailscale connection is up and that the Gateway WebSocket is reachable.
Docs: Channels, Troubleshooting, Remote access.

### "Disconnected from gateway" with no reason given. What now?

This usually means the UI lost the WebSocket connection. Check:

1. Is the Gateway running? `moltbot gateway status`
2. Is the Gateway healthy?
   `moltbot status`
3. Does the UI have the right token? `moltbot dashboard`
4. If remote, is the tunnel/Tailscale link up?
Then tail logs:

```bash
moltbot logs --follow
```

Docs: Dashboard, Remote access, Troubleshooting.

### Telegram `setMyCommands` fails with network errors. What should I check?

Start with logs and channel status:

```bash
moltbot channels status
moltbot channels logs --channel telegram
```

If you are on a VPS or behind a proxy, confirm outbound HTTPS is allowed and DNS works. If the Gateway is remote, make sure you are looking at the logs on the Gateway host.

Docs: Telegram, Channel troubleshooting.
### The TUI shows no output. What should I check?

First confirm the Gateway is reachable and the agent can run:

```bash
moltbot status
moltbot models status
moltbot logs --follow
```

In the TUI, use `/status` to see the current state. If you expect replies in a chat channel, make sure delivery is enabled (`/deliver on`).

Docs: TUI, Slash commands.

### How do I completely stop, then start, the Gateway?

If you installed the service:

```bash
moltbot gateway stop
moltbot gateway start
```

This stops/starts the supervised service (launchd on macOS, systemd on Linux).
Use this when the Gateway runs in the background as a daemon. If you're running in the foreground, stop with Ctrl-C, then:

```bash
moltbot gateway run
```

Docs: Gateway service runbook.

### ELI5: `moltbot gateway restart` vs `moltbot gateway`

- `moltbot gateway restart`: restarts the background service (launchd/systemd).
- `moltbot gateway`: runs the gateway in the foreground for this terminal session.
If you installed the service, use the gateway commands. Use `moltbot gateway` when you want a one-off, foreground run.

### What's the fastest way to get more detail when something fails?

Start the Gateway with `--verbose` to get more console detail.
Then inspect the log file for channel auth, model routing, and RPC errors.

## Media & attachments

### My skill generated an image/PDF, but nothing was sent

Outbound attachments from the agent must include a `MEDIA:` line (on its own line). See Moltbot assistant setup and Agent send.

CLI sending:

```bash
moltbot message send --target +15555550123 --message "Here you go" --media /path/to/file.png
```

Also check:

- The target channel supports outbound media and isn't blocked by allowlists.
- The file is within the provider's size limits (images are resized to a maximum of 2048px). See Images.

## Security and access control

### Is it safe to expose Moltbot to inbound DMs?

Treat inbound DMs as untrusted input. The defaults are designed to reduce risk:

- The default behavior on DM-capable channels is pairing: unknown senders receive a pairing code, and the bot does not process their message.
- Approve with: `moltbot pairing approve`
- Pending requests are capped at 3 per channel; check `moltbot pairing list` if a code didn't arrive.
- Opening DMs publicly requires explicit opt-in (`dmPolicy: "open"` and allowlist `"*"`).
- Run `moltbot doctor` to surface risky DM policies.

### Is prompt injection only a concern for public bots?

No.
Prompt injection is about untrusted content, not just who can DM the bot. If your assistant reads external content (web search/fetch, browser pages, emails, docs, attachments, pasted logs), that content can include instructions that try to hijack the model.
This can happen even if you are the only sender. The biggest risk is when tools are enabled: the model can be tricked into exfiltrating context or calling tools on your behalf.

Reduce the blast radius by:

- using a read-only or tool-disabled "reader" agent to summarize untrusted content
- keeping `web_search` / `web_fetch` / browser tools off for tool-enabled agents
- sandboxing and strict tool allowlists

Details: Security.

### Should my bot have its own email, GitHub account, or phone number?

Yes, for most setups.
Isolating the bot with separate accounts and phone numbers reduces the blast radius if something goes wrong.
This also makes it easier to rotate credentials or revoke access without impacting your personal accounts.

Start small: give access only to the tools and accounts you actually need, and expand later if required.

Docs: Security, Pairing.
### Can I give it autonomy over my text messages, and is that safe?

We do not recommend full autonomy over your personal messages. The safest pattern is:

- Keep DMs in pairing mode or behind a tight allowlist.
- Use a separate number or account if you want it to message on your behalf.
- Let it draft, then approve before sending.
If you want to experiment, do it on a dedicated account and keep it isolated. See Security.

### Can I use cheaper models for personal-assistant tasks?

Yes, if the agent is chat-only and the input is trusted. Smaller model tiers are more susceptible to instruction hijacking, so avoid them for tool-enabled agents or when reading untrusted content.
If you must use a smaller model, lock down tools and run inside a sandbox. See Security.

### I ran `/start` in Telegram but didn't get a pairing code

Pairing codes are sent only when an unknown sender messages the bot and `dmPolicy: "pairing"` is enabled. `/start` by itself doesn't generate a code.
Check pending requests:

```bash
moltbot pairing list telegram
```

If you want immediate access, allowlist your sender id or set `dmPolicy: "open"` for that account.

### WhatsApp: will it message my contacts? How does pairing work?

No. The default WhatsApp DM policy is pairing: unknown senders only get a pairing code, and their message is not processed.
Moltbot only replies to chats it receives or to explicit sends you trigger. Approve pairing with:

```bash
moltbot pairing approve whatsapp
```

List pending requests:

```bash
moltbot pairing list whatsapp
```

Wizard phone-number prompt: it's used to set your allowlist/owner so that your own DMs are permitted. It's not used for auto-sending. If you run on your personal WhatsApp number, use that number and enable `channels.whatsapp.selfChatMode`.
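The pairing flow described in the last few answers (unknown sender gets a code, their message is not processed, pending requests capped at 3 per channel, explicit approval adds the sender to the allowlist) can be sketched as follows. This illustrates the policy only; it is not Moltbot's actual code:

```python
import secrets

MAX_PENDING = 3  # pending pairing requests are capped per channel

class PairingChannel:
    """Sketch of per-channel DM pairing: unknown senders get a code,
    their message is NOT processed, and an operator must approve them."""

    def __init__(self):
        self.allowlist: set[str] = set()
        self.pending: dict[str, str] = {}  # sender -> pairing code

    def on_dm(self, sender: str, message: str) -> str:
        if sender in self.allowlist:
            return f"processed: {message}"
        if sender not in self.pending:
            if len(self.pending) >= MAX_PENDING:
                return "ignored (pending cap reached)"
            self.pending[sender] = secrets.token_hex(3).upper()
        # unknown sender: reply with a code, drop the message itself
        return f"pairing code sent: {self.pending[sender]}"

    def approve(self, sender: str) -> None:
        """Operator approval (what `moltbot pairing approve` triggers)."""
        self.pending.pop(sender, None)
        self.allowlist.add(sender)
```

The key property: until approval, nothing an unknown sender writes ever reaches the agent.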
## Chat commands, aborting tasks, and "it won't stop"

### How do I stop internal system messages from showing in chat?

Most internal or tool messages only appear when verbose or reasoning output is enabled for that session. Fix it in the chat where you see it:

```
/verbose off
/reasoning off
```

If it is still noisy, check the session settings in the Control UI and set verbose to inherit.
Also confirm you are not using a bot profile with `verboseDefault` set to `on` in config.

Docs: Thinking and verbose, Security.

### How do I stop/cancel a running task?

Send any of these as a standalone message (no slash):

```
stop
abort
esc
wait
exit
interrupt
```

These are abort triggers, not slash commands. For background processes (from the `exec` tool), you can ask the agent to run:

```
process action:kill sessionId:XXX
```

Slash commands overview: see Slash commands.
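The abort triggers above are matched as whole messages, which is why "please stop" or "/stop" does not abort. A sketch of that matching rule (not Moltbot's actual matcher):

```python
# Abort triggers from the list above; matched as the ENTIRE message.
ABORT_TRIGGERS = {"stop", "abort", "esc", "wait", "exit", "interrupt"}

def is_abort(message: str) -> bool:
    """True when the message is a standalone abort trigger.
    The whole message (ignoring surrounding whitespace and case)
    must equal a trigger word; embedded or slash-prefixed forms
    don't count."""
    return message.strip().lower() in ABORT_TRIGGERS
```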
Most commands must be sent as a standalone message that starts with `/`, but a few shortcuts (like `/status`) also work inline for allowlisted senders.

### How do I send a Discord message from Telegram? ("Cross-context messaging denied")

Moltbot blocks cross-provider messaging by default. If a tool call is bound to Telegram, it won't send to Discord unless you explicitly allow it. Enable cross-provider messaging for the agent:

```json5
{
  agents: {
    defaults: {
      tools: {
        message: {
          crossContext: {
            allowAcrossProviders: true,
            marker: { enabled: true, prefix: "[from {channel}] " }
          }
        }
      }
    }
  }
}
```

Restart the gateway after editing the config.
If you only want this for a single agent, set it under `agents.list[].tools.message` instead.

### Why does it feel like the bot ignores rapid-fire messages?

Queue mode controls how new messages interact with an in-flight run. Use `/queue` to change modes:

- `steer` - new messages redirect the current task
- `followup` - run messages one at a time
- `collect` - batch messages and reply once (default)
- `steer-backlog` - steer now, then process the backlog
- `interrupt` - abort the current run and start fresh

You can add options like `debounce:2s cap:25 drop:summarize` for followup modes.

### What's the default model for Anthropic with an API key?

In Moltbot, credentials and model selection are separate.
Setting `ANTHROPIC_API_KEY` (or storing an Anthropic API key in auth profiles) enables authentication, but the actual default model is whatever you configure in `agents.defaults.model.primary` (for example, `anthropic/claude-sonnet-4-5` or `anthropic/claude-opus-4-5`).

If you see `No credentials found for profile "anthropic:default"`, it means the Gateway couldn't find Anthropic credentials in the expected `auth-profiles.json` for the agent that's running.

Still stuck? Ask in Discord or open a GitHub discussion.
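Concretely, the credentials/model split from that last answer might look like this in config. This is a sketch: the model ID is the example from the answer above, and credentials live separately (environment variable or auth profile), not in this block:

```json5
{
  agents: {
    defaults: {
      // Model selection: which model the agent runs by default.
      // Credentials (ANTHROPIC_API_KEY or an auth profile) are configured
      // separately and only control authentication, not model choice.
      model: { primary: "anthropic/claude-sonnet-4-5" }
    }
  }
}
```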
Troubleshooting Install