
# nanobot: Ultra-Lightweight Personal AI Assistant


๐Ÿˆ **nanobot** is an **ultra-lightweight** personal AI assistant inspired by [OpenClaw](https://github.com/openclaw/openclaw). โšก๏ธ Delivers core agent functionality with **99% fewer lines of code** than OpenClaw. ๐Ÿ“ Real-time line count: run `bash core_agent_lines.sh` to verify anytime. ## ๐Ÿ“ข News - **2026-03-16** ๐Ÿš€ Released **v0.1.4.post5** โ€” a refinement-focused release with stronger reliability and channel support, and a more dependable day-to-day experience. Please see [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.4.post5) for details. - **2026-03-15** ๐Ÿงฉ DingTalk rich media, smarter built-in skills, and cleaner model compatibility. - **2026-03-14** ๐Ÿ’ฌ Channel plugins, Feishu replies, and steadier MCP, QQ, and media handling. - **2026-03-13** ๐ŸŒ Multi-provider web search, LangSmith, and broader reliability improvements. - **2026-03-12** ๐Ÿš€ VolcEngine support, Telegram reply context, `/restart`, and sturdier memory. - **2026-03-11** ๐Ÿ”Œ WeCom, Ollama, cleaner discovery, and safer tool behavior. - **2026-03-10** ๐Ÿง  Token-based memory, shared retries, and cleaner gateway and Telegram behavior. - **2026-03-09** ๐Ÿ’ฌ Slack thread polish and better Feishu audio compatibility. - **2026-03-08** ๐Ÿš€ Released **v0.1.4.post4** โ€” a reliability-packed release with safer defaults, better multi-instance support, sturdier MCP, and major channel and provider improvements. Please see [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.4.post4) for details. - **2026-03-07** ๐Ÿš€ Azure OpenAI provider, WhatsApp media, QQ group chats, and more Telegram/Feishu polish. - **2026-03-06** ๐Ÿช„ Lighter providers, smarter media handling, and sturdier memory and CLI compatibility.
<details>
<summary>Earlier news</summary>

- **2026-03-05** ⚡️ Telegram draft streaming, MCP SSE support, and broader channel reliability fixes.
- **2026-03-04** 🛠️ Dependency cleanup, safer file reads, and another round of test and Cron fixes.
- **2026-03-03** 🧠 Cleaner user-message merging, safer multimodal saves, and stronger Cron guards.
- **2026-03-02** 🛡️ Safer default access control, sturdier Cron reloads, and cleaner Matrix media handling.
- **2026-03-01** 🌐 Web proxy support, smarter Cron reminders, and Feishu rich-text parsing improvements.
- **2026-02-28** 🚀 Released **v0.1.4.post3** — cleaner context, hardened session history, and a smarter agent. See the [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.4.post3) for details.
- **2026-02-27** 🧠 Experimental thinking-mode support, DingTalk media messages, Feishu and QQ channel fixes.
- **2026-02-26** 🛡️ Session poisoning fix, WhatsApp dedup, Windows path guard, Mistral compatibility.
- **2026-02-25** 🧹 New Matrix channel, cleaner session context, auto workspace template sync.
- **2026-02-24** 🚀 Released **v0.1.4.post2** — a reliability-focused release with a redesigned heartbeat, prompt cache optimization, and hardened provider & channel stability. See the [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.4.post2) for details.
- **2026-02-23** 🔧 Virtual tool-call heartbeat, prompt cache optimization, Slack mrkdwn fixes.
- **2026-02-22** 🛡️ Slack thread isolation, Discord typing fix, agent reliability improvements.
- **2026-02-21** 🎉 Released **v0.1.4.post1** — new providers, media support across channels, and major stability improvements. See the [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.4.post1) for details.
- **2026-02-20** 🐦 Feishu now receives multimodal files from users. More reliable memory under the hood.
- **2026-02-19** ✨ Slack now sends files, Discord splits long messages, and subagents work in CLI mode.
- **2026-02-18** ⚡️ nanobot now supports VolcEngine, MCP custom auth headers, and Anthropic prompt caching.
- **2026-02-17** 🎉 Released **v0.1.4** — MCP support, progress streaming, new providers, and multiple channel improvements. See the [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.4) for details.
- **2026-02-16** 🦞 nanobot now integrates a [ClawHub](https://clawhub.ai) skill — search and install public agent skills.
- **2026-02-15** 🔑 nanobot now supports the OpenAI Codex provider with OAuth login.
- **2026-02-14** 🔌 nanobot now supports MCP! See the [MCP section](#mcp-model-context-protocol) for details.
- **2026-02-13** 🎉 Released **v0.1.3.post7** — includes security hardening and multiple improvements. **Please upgrade to the latest version to address security issues**. See the [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post7) for more details.
- **2026-02-12** 🧠 Redesigned memory system — less code, more reliable. Join the [discussion](https://github.com/HKUDS/nanobot/discussions/566) about it!
- **2026-02-11** ✨ Enhanced CLI experience and added MiniMax support!
- **2026-02-10** 🎉 Released **v0.1.3.post6** with improvements! Check the [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post6) and our [roadmap](https://github.com/HKUDS/nanobot/discussions/431).
- **2026-02-09** 💬 Added Slack, Email, and QQ support — nanobot now supports multiple chat platforms!
- **2026-02-08** 🔧 Refactored providers — adding a new LLM provider now takes just two simple steps! See [Providers](#providers).
- **2026-02-07** 🚀 Released **v0.1.3.post5** with Qwen support & several key improvements! See the [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post5) for details.
- **2026-02-06** ✨ Added Moonshot/Kimi provider, Discord integration, and enhanced security hardening!
- **2026-02-05** ✨ Added Feishu channel, DeepSeek provider, and enhanced scheduled-task support!
- **2026-02-04** 🚀 Released **v0.1.3.post4** with multi-provider & Docker support! See the [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post4) for details.
- **2026-02-03** ⚡ Integrated vLLM for local LLM support and improved natural-language task scheduling!
- **2026-02-02** 🎉 nanobot officially launched! Welcome to try 🐈 nanobot!

</details>
> ๐Ÿˆ nanobot is for educational, research, and technical exchange purposes only. It is unrelated to crypto and does not involve any official token or coin. ## Key Features of nanobot: ๐Ÿชถ **Ultra-Lightweight**: A super lightweight implementation of OpenClaw โ€” 99% smaller, significantly faster. ๐Ÿ”ฌ **Research-Ready**: Clean, readable code that's easy to understand, modify, and extend for research. โšก๏ธ **Lightning Fast**: Minimal footprint means faster startup, lower resource usage, and quicker iterations. ๐Ÿ’Ž **Easy-to-Use**: One-click to deploy and you're ready to go. ## ๐Ÿ—๏ธ Architecture

nanobot architecture

## Table of Contents

- [News](#-news)
- [Key Features](#key-features-of-nanobot)
- [Architecture](#️-architecture)
- [Features](#-features)
- [Install](#-install)
- [Quick Start](#-quick-start)
- [Chat Apps](#-chat-apps)
- [Agent Social Network](#-agent-social-network)
- [Configuration](#️-configuration)
- [Multiple Instances](#-multiple-instances)
- [CLI Reference](#-cli-reference)
- [Docker](#-docker)
- [Linux Service](#-linux-service)
- [Project Structure](#-project-structure)
- [Contribute & Roadmap](#-contribute--roadmap)
- [Star History](#-star-history)

## ✨ Features

| 📈 24/7 Real-Time Market Analysis | 🚀 Full-Stack Software Engineer | 📅 Smart Daily Routine Manager | 📚 Personal Knowledge Assistant |
|---|---|---|---|
| Discovery • Insights • Trends | Develop • Deploy • Scale | Schedule • Automate • Organize | Learn • Memory • Reasoning |
## 📦 Install

**Install from source** (latest features, recommended for development)

```bash
git clone https://github.com/HKUDS/nanobot.git
cd nanobot
pip install -e .
```

**Install with [uv](https://github.com/astral-sh/uv)** (stable, fast)

```bash
uv tool install nanobot-ai
```

**Install from PyPI** (stable)

```bash
pip install nanobot-ai
```

### Update to latest version

**PyPI / pip**

```bash
pip install -U nanobot-ai
nanobot --version
```

**uv**

```bash
uv tool upgrade nanobot-ai
nanobot --version
```

**Using WhatsApp?** Rebuild the local bridge after upgrading:

```bash
rm -rf ~/.nanobot/bridge
nanobot channels login whatsapp
```

## 🚀 Quick Start

> [!TIP]
> Set your API key in `~/.nanobot/config.json`.
> Get API keys: [OpenRouter](https://openrouter.ai/keys) (Global)
>
> For other LLM providers, see the [Providers](#providers) section.
>
> For web search capability setup (Brave Search or SearXNG), see [Web Search](#web-search).

**1. Initialize**

```bash
nanobot onboard
```

Use `nanobot onboard --wizard` if you want the interactive setup wizard.

**2. Configure** (`~/.nanobot/config.json`)

Configure these **two parts** in your config (other options have defaults).

*Set your API key* (e.g. OpenRouter, recommended for global users):

```json
{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-xxx"
    }
  }
}
```

*Set your model* (optionally pin a provider — defaults to auto-detection):

```json
{
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5",
      "provider": "openrouter"
    }
  }
}
```

**3. Chat**

```bash
nanobot agent
```

That's it! You have a working AI assistant in 2 minutes.

### Optional: Web Search

`web_search` supports both Brave Search and SearXNG.
**Brave Search**

```json
{
  "tools": {
    "web": {
      "search": {
        "provider": "brave",
        "apiKey": "your-brave-api-key"
      }
    }
  }
}
```

**SearXNG**

```json
{
  "tools": {
    "web": {
      "search": {
        "provider": "searxng",
        "baseUrl": "http://localhost:8080"
      }
    }
  }
}
```

`baseUrl` can point either to the SearXNG root (for example `http://localhost:8080`) or directly to `/search`.

### Optional: Voice Replies

Enable `channels.voiceReply` when you want nanobot to attach a synthesized voice reply on supported outbound channels such as Telegram. QQ voice replies are also supported when your TTS endpoint can return `silk`.

```json
{
  "channels": {
    "voiceReply": {
      "enabled": true,
      "channels": ["telegram"],
      "url": "https://your-tts-endpoint.example.com/v1",
      "model": "gpt-4o-mini-tts",
      "voice": "alloy",
      "instructions": "keep the delivery calm and clear",
      "speed": 1.0,
      "responseFormat": "opus"
    }
  }
}
```

Notes:

- `voiceReply` currently adds a voice attachment while keeping the normal text reply.
- For QQ voice delivery, use `responseFormat: "silk"`, because QQ local voice upload expects `.silk`.
- If `apiKey` and `apiBase` are omitted, nanobot falls back to the active provider credentials; use an OpenAI-compatible TTS endpoint for this.
- `voiceReply.url` is optional and can point either to a provider base URL such as `https://api.openai.com/v1` or directly to an `/audio/speech` endpoint. If omitted, nanobot uses the current conversation provider URL. `apiBase` remains supported as a legacy alias.
- Voice replies automatically follow the active session persona. nanobot builds TTS style instructions from that persona's `SOUL.md` and `USER.md`, so switching `/persona` changes both the text response style and the generated speech style together.
If a specific persona needs a fixed voice or speaking pattern, add a `VOICE.json` under the persona workspace:

- Default persona: `<workspace>/VOICE.json`
- Custom persona: `<workspace>/personas/<name>/VOICE.json`

Example:

```json
{
  "voice": "nova",
  "instructions": "sound crisp, confident, and slightly faster than normal",
  "speed": 1.15
}
```

## 💬 Chat Apps

Connect nanobot to your favorite chat platform. Want to build your own? See the [Channel Plugin Guide](./docs/CHANNEL_PLUGIN_GUIDE.md).

| Channel | What you need |
|---------|---------------|
| **Telegram** | Bot token from @BotFather |
| **Discord** | Bot token + Message Content intent |
| **WhatsApp** | QR code scan (`nanobot channels login whatsapp`) |
| **WeChat (Weixin)** | QR code scan (`nanobot channels login weixin`) |
| **Feishu** | App ID + App Secret |
| **DingTalk** | App Key + App Secret |
| **Slack** | Bot token + App-Level token |
| **Matrix** | Homeserver URL + Access token |
| **Email** | IMAP/SMTP credentials |
| **QQ** | App ID + App Secret |
| **Wecom** | Bot ID + Bot Secret |
| **Mochat** | Claw token (auto-setup available) |

Multi-bot support is available for `whatsapp`, `telegram`, `discord`, `feishu`, `mochat`, `dingtalk`, `slack`, `email`, `qq`, `matrix`, and `wecom`. Use `instances` when you want more than one bot/account for the same channel; each instance is routed as `channel/name`.

```json
{
  "channels": {
    "telegram": {
      "enabled": true,
      "instances": [
        { "name": "main", "token": "BOT_TOKEN_A", "allowFrom": ["YOUR_USER_ID"] },
        { "name": "backup", "token": "BOT_TOKEN_B", "allowFrom": ["YOUR_USER_ID"] }
      ]
    }
  }
}
```

For `whatsapp`, each instance should point to its own bridge process with its own `bridgeUrl` and bridge auth/session directory.

Multi-instance notes:

- Keep each `instances[].name` unique within the same channel.
- Single-instance config is still supported; switch to `instances` only when you need multiple bots/accounts for the same channel.
- Replies, sessions, and routing use `channel/name`, for example `telegram/main` or `qq/bot-a`.
- `matrix` instances automatically use isolated `matrix-store/` directories.
- `mochat` instances automatically use isolated runtime cursor directories.
- `whatsapp` instances require separate bridge processes, typically with different `BRIDGE_PORT` and `AUTH_DIR` values.

Example with two different multi-instance channels:

```json
{
  "channels": {
    "telegram": {
      "enabled": true,
      "instances": [
        { "name": "main", "token": "BOT_TOKEN_A", "allowFrom": ["YOUR_USER_ID"] },
        { "name": "backup", "token": "BOT_TOKEN_B", "allowFrom": ["YOUR_USER_ID"] }
      ]
    },
    "matrix": {
      "enabled": true,
      "instances": [
        {
          "name": "ops",
          "homeserver": "https://matrix.org",
          "userId": "@bot-ops:matrix.org",
          "accessToken": "syt_ops",
          "deviceId": "OPS01",
          "allowFrom": ["@your_user:matrix.org"]
        },
        {
          "name": "support",
          "homeserver": "https://matrix.org",
          "userId": "@bot-support:matrix.org",
          "accessToken": "syt_support",
          "deviceId": "SUPPORT01",
          "allowFrom": ["@your_user:matrix.org"]
        }
      ]
    }
  }
}
```
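The `channel/name` routing convention above can be sketched as a tiny helper. This is illustrative only — the function name and behavior are assumptions for the example, not nanobot's actual internals:

```python
def parse_route(route: str) -> tuple:
    """Split a routing key like 'telegram/main' into (channel, instance).

    A single-instance channel has no '/name' suffix, so the instance
    part defaults to an empty string. Illustrative helper only.
    """
    channel, _sep, instance = route.partition("/")
    return channel, instance

# 'telegram/main' routes to the 'main' instance of the telegram channel.
print(parse_route("telegram/main"))
```

The same split applies to sessions and replies, which is why instance names must be unique within a channel.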
### Telegram (Recommended)

**1. Create a bot**

- Open Telegram, search `@BotFather`
- Send `/newbot`, follow the prompts
- Copy the token

**2. Configure**

```json
{
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["YOUR_USER_ID"]
    }
  }
}
```

> You can find your **User ID** in Telegram settings, shown as `@yourUserId`.
> Copy this value **without the `@` symbol** and paste it into the config file.

**3. Run**

```bash
nanobot gateway
```
### Mochat (Claw IM)

Uses **Socket.IO WebSocket** by default, with HTTP polling fallback.

**1. Ask nanobot to set up Mochat for you**

Simply send this message to nanobot (replace `xxx@xxx` with your real email):

```
Read https://raw.githubusercontent.com/HKUDS/MoChat/refs/heads/main/skills/nanobot/skill.md and register on MoChat. My Email account is xxx@xxx Bind me as your owner and DM me on MoChat.
```

nanobot will automatically register, configure `~/.nanobot/config.json`, and connect to Mochat.

**2. Restart the gateway**

```bash
nanobot gateway
```

That's it — nanobot handles the rest!
#### Manual configuration (advanced)

If you prefer to configure manually, add the following to `~/.nanobot/config.json`:

> Keep `claw_token` private. It should only be sent in the `X-Claw-Token` header to your Mochat API endpoint.

```json
{
  "channels": {
    "mochat": {
      "enabled": true,
      "base_url": "https://mochat.io",
      "socket_url": "https://mochat.io",
      "socket_path": "/socket.io",
      "claw_token": "claw_xxx",
      "agent_user_id": "6982abcdef",
      "sessions": ["*"],
      "panels": ["*"],
      "reply_delay_mode": "non-mention",
      "reply_delay_ms": 120000
    }
  }
}
```

> Multi-account mode is also supported with `instances`; each instance keeps its Mochat runtime cursors in its own state directory automatically.
### Discord

**1. Create a bot**

- Go to https://discord.com/developers/applications
- Create an application → Bot → Add Bot
- Copy the bot token

**2. Enable intents**

- In the Bot settings, enable **MESSAGE CONTENT INTENT**
- (Optional) Enable **SERVER MEMBERS INTENT** if you plan to use allow lists based on member data

**3. Get your User ID**

- Discord Settings → Advanced → enable **Developer Mode**
- Right-click your avatar → **Copy User ID**

**4. Configure**

```json
{
  "channels": {
    "discord": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["YOUR_USER_ID"],
      "groupPolicy": "mention"
    }
  }
}
```

> `groupPolicy` controls how the bot responds in group channels:
> - `"mention"` (default) — only respond when @mentioned
> - `"open"` — respond to all messages
>
> DMs always respond when the sender is in `allowFrom`.

**5. Invite the bot**

- OAuth2 → URL Generator
- Scopes: `bot`
- Bot Permissions: `Send Messages`, `Read Message History`
- Open the generated invite URL and add the bot to your server

**6. Run**

```bash
nanobot gateway
```
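The `groupPolicy` and `allowFrom` rules described above amount to a small decision function. This is a sketch under stated assumptions — the function and its signature are illustrative, not nanobot's actual gating code:

```python
def should_respond(is_dm: bool, sender: str, allow_from: list,
                   group_policy: str = "mention", mentioned: bool = False) -> bool:
    """Sketch of the Discord response gating described above."""
    allowed = "*" in allow_from or sender in allow_from
    if is_dm:
        # DMs always respond when the sender is in allowFrom.
        return allowed
    if group_policy == "open":
        # "open": respond to all group messages.
        return True
    # "mention" (default): only respond when @mentioned.
    return mentioned
```

For example, a group message without a mention is ignored under the default policy, while the same message in a DM from an allowed user gets a reply.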
### Matrix (Element)

Install Matrix dependencies first:

```bash
pip install nanobot-ai[matrix]
```

**1. Create/choose a Matrix account**

- Create or reuse a Matrix account on your homeserver (for example `matrix.org`).
- Confirm you can log in with Element.

**2. Get credentials**

You need:

- `userId` (example: `@nanobot:matrix.org`)
- `accessToken`
- `deviceId` (recommended so sync tokens can be restored across restarts)

You can obtain these from your homeserver login API (`/_matrix/client/v3/login`) or from your client's advanced session settings.

**3. Configure**

```json
{
  "channels": {
    "matrix": {
      "enabled": true,
      "homeserver": "https://matrix.org",
      "userId": "@nanobot:matrix.org",
      "accessToken": "syt_xxx",
      "deviceId": "NANOBOT01",
      "e2eeEnabled": true,
      "allowFrom": ["@your_user:matrix.org"],
      "groupPolicy": "open",
      "groupAllowFrom": [],
      "allowRoomMentions": false,
      "maxMediaBytes": 20971520
    }
  }
}
```

> Keep a persistent `matrix-store` and a stable `deviceId` — encrypted session state is lost if these change across restarts.
> In multi-account mode, nanobot isolates each instance into its own `matrix-store/` directory automatically.

| Option | Description |
|--------|-------------|
| `allowFrom` | User IDs allowed to interact. Empty denies all; use `["*"]` to allow everyone. |
| `groupPolicy` | `open` (default), `mention`, or `allowlist`. |
| `groupAllowFrom` | Room allowlist (used when the policy is `allowlist`). |
| `allowRoomMentions` | Accept `@room` mentions in mention mode. |
| `e2eeEnabled` | E2EE support (default `true`). Set `false` for plaintext-only. |
| `maxMediaBytes` | Max attachment size (default `20MB`). Set `0` to block all media. |

**4. Run**

```bash
nanobot gateway
```
### WhatsApp

Requires **Node.js ≥ 18**.

**1. Link device**

```bash
nanobot channels login whatsapp
# Scan QR with WhatsApp → Settings → Linked Devices
```

**2. Configure**

```json
{
  "channels": {
    "whatsapp": {
      "enabled": true,
      "allowFrom": ["+1234567890"]
    }
  }
}
```

> Multi-bot mode is supported with `instances`, but each bot must connect to its own bridge process. Run separate bridge processes with different `BRIDGE_PORT` and `AUTH_DIR` values, then point each instance at its own `bridgeUrl`.

**3. Run** (two terminals)

```bash
# Terminal 1
nanobot channels login whatsapp

# Terminal 2
nanobot gateway
```

> WhatsApp bridge updates are not applied automatically for existing installations.
> After upgrading nanobot, rebuild the local bridge with:
> `rm -rf ~/.nanobot/bridge && nanobot channels login whatsapp`
### Feishu (飞书)

Uses a **WebSocket** long connection — no public IP required.

**1. Create a Feishu bot**

- Visit the [Feishu Open Platform](https://open.feishu.cn/app)
- Create a new app → enable the **Bot** capability
- **Permissions**: Add `im:message` (send messages) and `im:message.p2p_msg:readonly` (receive messages)
- **Events**: Add `im.message.receive_v1` (receive messages)
- Select **Long Connection** mode (requires running nanobot first to establish the connection)
- Get the **App ID** and **App Secret** from "Credentials & Basic Info"
- Publish the app

**2. Configure**

```json
{
  "channels": {
    "feishu": {
      "enabled": true,
      "appId": "cli_xxx",
      "appSecret": "xxx",
      "encryptKey": "",
      "verificationToken": "",
      "allowFrom": ["ou_YOUR_OPEN_ID"],
      "groupPolicy": "mention"
    }
  }
}
```

> `encryptKey` and `verificationToken` are optional for Long Connection mode.
> `allowFrom`: Add your open_id (find it in the nanobot logs when you message the bot). Use `["*"]` to allow all users.
> `groupPolicy`: `"mention"` (default — respond only when @mentioned) or `"open"` (respond to all group messages). Private chats always respond.

**3. Run**

```bash
nanobot gateway
```

> [!TIP]
> Feishu uses WebSocket to receive messages — no webhook or public IP needed!
QQ (QQๅ•่Š) Uses **botpy SDK** with WebSocket โ€” no public IP required. Currently supports **private messages only**. **1. Register & create bot** - Visit [QQ Open Platform](https://q.qq.com) โ†’ Register as a developer (personal or enterprise) - Create a new bot application - Go to **ๅผ€ๅ‘่ฎพ็ฝฎ (Developer Settings)** โ†’ copy **AppID** and **AppSecret** **2. Set up sandbox for testing** - In the bot management console, find **ๆฒ™็ฎฑ้…็ฝฎ (Sandbox Config)** - Under **ๅœจๆถˆๆฏๅˆ—่กจ้…็ฝฎ**, click **ๆทปๅŠ ๆˆๅ‘˜** and add your own QQ number - Once added, scan the bot's QR code with mobile QQ โ†’ open the bot profile โ†’ tap "ๅ‘ๆถˆๆฏ" to start chatting **3. Configure** > - `allowFrom`: Add your openid (find it in nanobot logs when you message the bot). Use `["*"]` for public access. > - For production: submit a review in the bot console and publish. See [QQ Bot Docs](https://bot.q.qq.com/wiki/) for the full publishing flow. > - Single-bot config is still supported. For multiple bots, use `instances`, and each bot is routed as `qq/`. ```json { "channels": { "qq": { "enabled": true, "appId": "YOUR_APP_ID", "secret": "YOUR_APP_SECRET", "allowFrom": ["YOUR_OPENID"], "mediaBaseUrl": "https://files.example.com/out/" } } } ``` For local QQ media, nanobot uploads files directly with `file_data` from generated delivery artifacts under `workspace/out`. Local uploads do not require `mediaBaseUrl`, and nanobot does not fall back to URL-based upload for local files anymore. Supported local QQ rich media are images, `.mp4` video, and `.silk` voice. Multi-bot example: ```json { "channels": { "qq": { "enabled": true, "instances": [ { "name": "bot-a", "appId": "YOUR_APP_ID_A", "secret": "YOUR_APP_SECRET_A", "allowFrom": ["YOUR_OPENID"] }, { "name": "bot-b", "appId": "YOUR_APP_ID_B", "secret": "YOUR_APP_SECRET_B", "allowFrom": ["*"] } ] } } } ``` **4. Run** ```bash nanobot gateway ``` Now send a message to the bot from QQ โ€” it should respond! 
Outbound QQ media sends remote `http(s)` images through the QQ rich-media `url` flow directly. For local image files, nanobot always tries a `file_data` upload first. When `mediaBaseUrl` is configured, nanobot also maps the same local file onto that public URL and can fall back to the URL-only rich-media flow if the direct upload fails. Without `mediaBaseUrl`, nanobot still attempts the direct upload, but there is no URL fallback path.

Tools and skills should write deliverable files under `workspace/out`; QQ accepts only local image files from that directory. When an agent uses shell/browser tools to create screenshots or other temporary files for delivery, it should write them under `workspace/out` instead of the workspace root, so that channel publishing rules apply consistently.
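The outbound QQ media rules above boil down to a small decision tree. A sketch with hypothetical names — nanobot's real implementation differs:

```python
def qq_media_plan(src: str, media_base_url: str = None) -> list:
    """Return the ordered delivery attempts for one outbound media file.

    Sketch of the rules described above: remote http(s) media goes through
    the URL rich-media flow; local files try a direct file_data upload
    first, with a URL fallback only when mediaBaseUrl is configured.
    """
    if src.startswith(("http://", "https://")):
        return ["url"]
    attempts = ["file_data"]
    if media_base_url:
        # Same local file mapped onto the public mediaBaseUrl as a fallback.
        attempts.append("url")
    return attempts

print(qq_media_plan("workspace/out/shot.png"))  # local file, no mediaBaseUrl
```

With `mediaBaseUrl` set, the plan for a local file becomes `["file_data", "url"]`, matching the fallback behavior described above.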
### DingTalk (钉钉)

Uses **Stream Mode** — no public IP required.

**1. Create a DingTalk bot**

- Visit the [DingTalk Open Platform](https://open-dev.dingtalk.com/)
- Create a new app → add the **Robot** capability
- **Configuration**: toggle **Stream Mode** ON
- **Permissions**: add the necessary permissions for sending messages
- Get the **AppKey** (Client ID) and **AppSecret** (Client Secret) from "Credentials"
- Publish the app

**2. Configure**

```json
{
  "channels": {
    "dingtalk": {
      "enabled": true,
      "clientId": "YOUR_APP_KEY",
      "clientSecret": "YOUR_APP_SECRET",
      "allowFrom": ["YOUR_STAFF_ID"]
    }
  }
}
```

> `allowFrom`: Add your staff ID. Use `["*"]` to allow all users.

**3. Run**

```bash
nanobot gateway
```
### Slack

Uses **Socket Mode** — no public URL required.

**1. Create a Slack app**

- Go to [Slack API](https://api.slack.com/apps) → **Create New App** → "From scratch"
- Pick a name and select your workspace

**2. Configure the app**

- **Socket Mode**: toggle ON → generate an **App-Level Token** with the `connections:write` scope → copy it (`xapp-...`)
- **OAuth & Permissions**: add bot scopes `chat:write`, `reactions:write`, `app_mentions:read`
- **Event Subscriptions**: toggle ON → subscribe to bot events `message.im`, `message.channels`, `app_mention` → Save Changes
- **App Home**: scroll to **Show Tabs** → enable **Messages Tab** → check **"Allow users to send Slash commands and messages from the messages tab"**
- **Install App**: click **Install to Workspace** → authorize → copy the **Bot Token** (`xoxb-...`)

**3. Configure nanobot**

```json
{
  "channels": {
    "slack": {
      "enabled": true,
      "botToken": "xoxb-...",
      "appToken": "xapp-...",
      "allowFrom": ["YOUR_SLACK_USER_ID"],
      "groupPolicy": "mention"
    }
  }
}
```

**4. Run**

```bash
nanobot gateway
```

DM the bot directly or @mention it in a channel — it should respond!

> [!TIP]
> - `groupPolicy`: `"mention"` (default — respond only when @mentioned), `"open"` (respond to all channel messages), or `"allowlist"` (restrict to specific channels).
> - DM policy defaults to open. Set `"dm": {"enabled": false}` to disable DMs.
### Email

Give nanobot its own email account. It polls **IMAP** for incoming mail and replies via **SMTP** — like a personal email assistant.

**1. Get credentials (Gmail example)**

- Create a dedicated Gmail account for your bot (e.g. `my-nanobot@gmail.com`)
- Enable 2-Step Verification → create an [App Password](https://myaccount.google.com/apppasswords)
- Use this app password for both IMAP and SMTP

**2. Configure**

> - `consentGranted` must be `true` to allow mailbox access. This is a safety gate — set it to `false` to fully disable.
> - `allowFrom`: Add your email address. Use `["*"]` to accept emails from anyone.
> - `smtpUseTls` and `smtpUseSsl` default to `true` and `false` respectively, which is correct for Gmail (port 587 + STARTTLS). No need to set them explicitly.
> - Set `"autoReplyEnabled": false` if you only want to read/analyze emails without sending automatic replies.

```json
{
  "channels": {
    "email": {
      "enabled": true,
      "consentGranted": true,
      "imapHost": "imap.gmail.com",
      "imapPort": 993,
      "imapUsername": "my-nanobot@gmail.com",
      "imapPassword": "your-app-password",
      "smtpHost": "smtp.gmail.com",
      "smtpPort": 587,
      "smtpUsername": "my-nanobot@gmail.com",
      "smtpPassword": "your-app-password",
      "fromAddress": "my-nanobot@gmail.com",
      "allowFrom": ["your-real-email@gmail.com"]
    }
  }
}
```

**3. Run**

```bash
nanobot gateway
```
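As a rough illustration of the SMTP side, here is a hand-rolled reply built with the Python standard library. This is a sketch, not nanobot's actual mailer; actual sending would use `smtplib.SMTP` on port 587 with `starttls()` and the app password, omitted here so the example stays offline:

```python
from email.message import EmailMessage

def build_reply(from_addr: str, to_addr: str, subject: str, body: str) -> EmailMessage:
    """Build a plain-text reply like an email assistant might send."""
    msg = EmailMessage()
    msg["From"] = from_addr
    msg["To"] = to_addr
    # Conventional reply prefix; real threading would also set In-Reply-To.
    if not subject.lower().startswith("re:"):
        subject = f"Re: {subject}"
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

reply = build_reply("my-nanobot@gmail.com", "you@example.com",
                    "Weekly report", "Here is the summary you asked for.")
```

With `autoReplyEnabled` off, nanobot would only read mail and never send a message like this automatically.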
### WeChat (微信 / Weixin)

Uses **HTTP long-poll** with QR-code login via the ilinkai personal WeChat API. No local WeChat desktop client is required.

> Weixin support is available from a source checkout, but is not yet included in the current PyPI release.

**1. Install from source**

```bash
git clone https://github.com/HKUDS/nanobot.git
cd nanobot
pip install -e ".[weixin]"
```

**2. Configure**

```json
{
  "channels": {
    "weixin": {
      "enabled": true,
      "allowFrom": ["YOUR_WECHAT_USER_ID"]
    }
  }
}
```

> - `allowFrom`: Add the sender ID you see in the nanobot logs for your WeChat account. Use `["*"]` to allow all users.
> - `token`: Optional. If omitted, log in interactively and nanobot will save the token for you.
> - `stateDir`: Optional. Defaults to nanobot's runtime directory for Weixin state.
> - `pollTimeout`: Optional long-poll timeout in seconds.

**3. Log in**

```bash
nanobot channels login weixin
```

Use `--force` to re-authenticate and ignore any saved token:

```bash
nanobot channels login weixin --force
```

**4. Run**

```bash
nanobot gateway
```
Wecom (ไผไธšๅพฎไฟก) > Here we use [wecom-aibot-sdk-python](https://github.com/chengyongru/wecom_aibot_sdk) (community Python version of the official [@wecom/aibot-node-sdk](https://www.npmjs.com/package/@wecom/aibot-node-sdk)). > > Uses **WebSocket** long connection โ€” no public IP required. **1. Install the optional dependency** ```bash pip install nanobot-ai[wecom] ``` **2. Create a WeCom AI Bot** Go to the WeCom admin console โ†’ Intelligent Robot โ†’ Create Robot โ†’ select **API mode** with **long connection**. Copy the Bot ID and Secret. **3. Configure** ```json { "channels": { "wecom": { "enabled": true, "botId": "your_bot_id", "secret": "your_bot_secret", "allowFrom": ["your_id"] } } } ``` **4. Run** ```bash nanobot gateway ```
## ๐ŸŒ Agent Social Network ๐Ÿˆ nanobot is capable of linking to the agent social network (agent community). **Just send one message and your nanobot joins automatically!** | Platform | How to Join (send this message to your bot) | |----------|-------------| | [**Moltbook**](https://www.moltbook.com/) | `Read https://moltbook.com/skill.md and follow the instructions to join Moltbook` | | [**ClawdChat**](https://clawdchat.ai/) | `Read https://clawdchat.ai/skill.md and follow the instructions to join ClawdChat` | Simply send the command above to your nanobot (via CLI or any chat channel), and it will handle the rest. ## โš™๏ธ Configuration Config file: `~/.nanobot/config.json` ### Providers > [!TIP] > - **Groq** provides free voice transcription via Whisper. If configured, Telegram voice messages will be automatically transcribed. > - **MiniMax Coding Plan**: Exclusive discount links for the nanobot community: [Overseas](https://platform.minimax.io/subscribe/coding-plan?code=9txpdXw04g&source=link) ยท [Mainland China](https://platform.minimaxi.com/subscribe/token-plan?code=GILTJpMTqZ&source=link) > - **MiniMax (Mainland China)**: If your API key is from MiniMax's mainland China platform (minimaxi.com), set `"apiBase": "https://api.minimaxi.com/v1"` in your minimax provider config. > - **VolcEngine / BytePlus Coding Plan**: Use dedicated providers `volcengineCodingPlan` or `byteplusCodingPlan` instead of the pay-per-use `volcengine` / `byteplus` providers. > - **Zhipu Coding Plan**: If you're on Zhipu's coding plan, set `"apiBase": "https://open.bigmodel.cn/api/coding/paas/v4"` in your zhipu provider config. > - **Alibaba Cloud Coding Plan**: If you're on the Alibaba Cloud Coding Plan (BaiLian), set `"apiBase": "https://coding.dashscope.aliyuncs.com/v1"` in your dashscope provider config. 
> - **Alibaba Cloud BaiLian**: If you're using Alibaba Cloud BaiLian's OpenAI-compatible endpoint, set `"apiBase": "https://dashscope.aliyuncs.com/compatible-mode/v1"` in your dashscope provider config.

| Provider | Purpose | Get API Key |
|----------|---------|-------------|
| `custom` | Any OpenAI-compatible endpoint (direct, no LiteLLM) | — |
| `openrouter` | LLM (recommended, access to all models) | [openrouter.ai](https://openrouter.ai) |
| `volcengine` | LLM (VolcEngine, pay-per-use) | [Coding Plan](https://www.volcengine.com/activity/codingplan?utm_campaign=nanobot&utm_content=nanobot&utm_medium=devrel&utm_source=OWO&utm_term=nanobot) · [volcengine.com](https://www.volcengine.com) |
| `byteplus` | LLM (VolcEngine international, pay-per-use) | [Coding Plan](https://www.byteplus.com/en/activity/codingplan?utm_campaign=nanobot&utm_content=nanobot&utm_medium=devrel&utm_source=OWO&utm_term=nanobot) · [byteplus.com](https://www.byteplus.com) |
| `anthropic` | LLM (Claude direct) | [console.anthropic.com](https://console.anthropic.com) |
| `azure_openai` | LLM (Azure OpenAI) | [portal.azure.com](https://portal.azure.com) |
| `openai` | LLM (GPT direct) | [platform.openai.com](https://platform.openai.com) |
| `deepseek` | LLM (DeepSeek direct) | [platform.deepseek.com](https://platform.deepseek.com) |
| `groq` | LLM + **Voice transcription** (Whisper) | [console.groq.com](https://console.groq.com) |
| `minimax` | LLM (MiniMax direct) | [platform.minimaxi.com](https://platform.minimaxi.com) |
| `gemini` | LLM (Gemini direct) | [aistudio.google.com](https://aistudio.google.com) |
| `aihubmix` | LLM (API gateway, access to all models) | [aihubmix.com](https://aihubmix.com) |
| `siliconflow` | LLM (SiliconFlow/硅基流动) | [siliconflow.cn](https://siliconflow.cn) |
| `dashscope` | LLM (Qwen) | [dashscope.console.aliyun.com](https://dashscope.console.aliyun.com) |
| `moonshot` | LLM (Moonshot/Kimi) | [platform.moonshot.cn](https://platform.moonshot.cn) |
| `zhipu` | LLM (Zhipu GLM) | [open.bigmodel.cn](https://open.bigmodel.cn) |
| `ollama` | LLM (local, Ollama) | — |
| `mistral` | LLM | [docs.mistral.ai](https://docs.mistral.ai/) |
| `ovms` | LLM (local, OpenVINO Model Server) | [docs.openvino.ai](https://docs.openvino.ai/2026/model-server/ovms_docs_llm_quickstart.html) |
| `vllm` | LLM (local, any OpenAI-compatible server) | — |
| `openai_codex` | LLM (Codex, OAuth) | `nanobot provider login openai-codex` |
| `github_copilot` | LLM (GitHub Copilot, OAuth) | `nanobot provider login github-copilot` |
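For example, the BaiLian note above corresponds to a provider block like this (the `apiKey` value is a placeholder; the field names follow the provider examples elsewhere in this README):

```json
{
  "providers": {
    "dashscope": {
      "apiKey": "your-dashscope-api-key",
      "apiBase": "https://dashscope.aliyuncs.com/compatible-mode/v1"
    }
  }
}
```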
OpenAI Codex (OAuth) Codex uses OAuth instead of API keys. Requires a ChatGPT Plus or Pro account. No `providers.openaiCodex` block is needed in `config.json`; `nanobot provider login` stores the OAuth session outside config. **1. Login:** ```bash nanobot provider login openai-codex ``` **2. Set model** (merge into `~/.nanobot/config.json`): ```json { "agents": { "defaults": { "model": "openai-codex/gpt-5.1-codex" } } } ``` **3. Chat:** ```bash nanobot agent -m "Hello!" # Target a specific workspace/config locally nanobot agent -c ~/.nanobot-telegram/config.json -m "Hello!" # One-off workspace override on top of that config nanobot agent -c ~/.nanobot-telegram/config.json -w /tmp/nanobot-telegram-test -m "Hello!" ``` > Docker users: use `docker run -it` for interactive OAuth login.
GitHub Copilot (OAuth) GitHub Copilot uses OAuth instead of API keys. Requires a [GitHub account with a plan](https://github.com/features/copilot/plans) configured. No `providers.githubCopilot` block is needed in `config.json`; `nanobot provider login` stores the OAuth session outside config. **1. Login:** ```bash nanobot provider login github-copilot ``` **2. Set model** (merge into `~/.nanobot/config.json`): ```json { "agents": { "defaults": { "model": "github-copilot/gpt-4.1" } } } ``` **3. Chat:** ```bash nanobot agent -m "Hello!" # Target a specific workspace/config locally nanobot agent -c ~/.nanobot-telegram/config.json -m "Hello!" # One-off workspace override on top of that config nanobot agent -c ~/.nanobot-telegram/config.json -w /tmp/nanobot-telegram-test -m "Hello!" ``` > Docker users: use `docker run -it` for interactive OAuth login.
Custom Provider (Any OpenAI-compatible API)

Connects directly to any OpenAI-compatible endpoint — LM Studio, llama.cpp, Together AI, Fireworks, Azure OpenAI, or any self-hosted server. Bypasses LiteLLM; model name is passed as-is.

```json
{
  "providers": {
    "custom": {
      "apiKey": "your-api-key",
      "apiBase": "https://api.your-provider.com/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "your-model-name"
    }
  }
}
```

> For local servers that don't require a key, set `apiKey` to any non-empty string (e.g. `"no-key"`).
Ollama (local)

Run a local model with Ollama, then add to config:

**1. Start Ollama** (example):

```bash
ollama run llama3.2
```

**2. Add to config** (partial — merge into `~/.nanobot/config.json`):

```json
{
  "providers": {
    "ollama": {
      "apiBase": "http://localhost:11434"
    }
  },
  "agents": {
    "defaults": {
      "provider": "ollama",
      "model": "llama3.2"
    }
  }
}
```

> `provider: "auto"` also works when `providers.ollama.apiBase` is configured, but setting `"provider": "ollama"` is the clearest option.
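"Merge into `~/.nanobot/config.json`" means combining the partial snippet with your existing config rather than overwriting the whole file. A recursive merge can be sketched like this (an illustrative helper, not part of nanobot's API; the existing config shown is made up):

```python
import json

def deep_merge(base: dict, patch: dict) -> dict:
    """Recursively merge `patch` into `base`; patch values win on conflicts."""
    merged = dict(base)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# A hypothetical existing config and the partial snippet from above
existing = {"agents": {"defaults": {"model": "gpt-4.1"}}, "providers": {}}
patch = {
    "providers": {"ollama": {"apiBase": "http://localhost:11434"}},
    "agents": {"defaults": {"provider": "ollama", "model": "llama3.2"}},
}
merged = deep_merge(existing, patch)
print(json.dumps(merged, indent=2))
```

Unrelated keys in the existing config survive; only the keys present in the snippet are added or overwritten.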
OpenVINO Model Server (local / OpenAI-compatible)

Run LLMs locally on Intel GPUs using [OpenVINO Model Server](https://docs.openvino.ai/2026/model-server/ovms_docs_llm_quickstart.html). OVMS exposes an OpenAI-compatible API at `/v3`.

> Requires Docker and an Intel GPU with driver access (`/dev/dri`).

**1. Pull the model** (example):

```bash
mkdir -p ov/models && cd ov
docker run -d \
  --rm \
  --user $(id -u):$(id -g) \
  -v $(pwd)/models:/models \
  openvino/model_server:latest-gpu \
  --pull \
  --model_name openai/gpt-oss-20b \
  --model_repository_path /models \
  --source_model OpenVINO/gpt-oss-20b-int4-ov \
  --task text_generation \
  --tool_parser gptoss \
  --reasoning_parser gptoss \
  --enable_prefix_caching true \
  --target_device GPU
```

> This downloads the model weights. Wait for the container to finish before proceeding.

**2. Start the server** (example):

```bash
docker run -d \
  --rm \
  --name ovms \
  --user $(id -u):$(id -g) \
  -p 8000:8000 \
  -v $(pwd)/models:/models \
  --device /dev/dri \
  --group-add=$(stat -c "%g" /dev/dri/render* | head -n 1) \
  openvino/model_server:latest-gpu \
  --rest_port 8000 \
  --model_name openai/gpt-oss-20b \
  --model_repository_path /models \
  --source_model OpenVINO/gpt-oss-20b-int4-ov \
  --task text_generation \
  --tool_parser gptoss \
  --reasoning_parser gptoss \
  --enable_prefix_caching true \
  --target_device GPU
```

**3. Add to config** (partial — merge into `~/.nanobot/config.json`):

```json
{
  "providers": {
    "ovms": {
      "apiBase": "http://localhost:8000/v3"
    }
  },
  "agents": {
    "defaults": {
      "provider": "ovms",
      "model": "openai/gpt-oss-20b"
    }
  }
}
```

> OVMS is a local server — no API key required. Supports tool calling (`--tool_parser gptoss`), reasoning (`--reasoning_parser gptoss`), and streaming.
> See the [official OVMS docs](https://docs.openvino.ai/2026/model-server/ovms_docs_llm_quickstart.html) for more details.
vLLM (local / OpenAI-compatible)

Run your own model with vLLM or any OpenAI-compatible server, then add to config:

**1. Start the server** (example):

```bash
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
```

**2. Add to config** (partial — merge into `~/.nanobot/config.json`):

*Provider (key can be any non-empty string for local):*

```json
{
  "providers": {
    "vllm": {
      "apiKey": "dummy",
      "apiBase": "http://localhost:8000/v1"
    }
  }
}
```

*Model:*

```json
{
  "agents": {
    "defaults": {
      "model": "meta-llama/Llama-3.1-8B-Instruct"
    }
  }
}
```
Adding a New Provider (Developer Guide)

nanobot uses a **Provider Registry** (`nanobot/providers/registry.py`) as the single source of truth. Adding a new provider only takes **2 steps** — no if-elif chains to touch.

**Step 1.** Add a `ProviderSpec` entry to `PROVIDERS` in `nanobot/providers/registry.py`:

```python
ProviderSpec(
    name="myprovider",                   # config field name
    keywords=("myprovider", "mymodel"),  # model-name keywords for auto-matching
    env_key="MYPROVIDER_API_KEY",        # env var for LiteLLM
    display_name="My Provider",          # shown in `nanobot status`
    litellm_prefix="myprovider",         # auto-prefix: model → myprovider/model
    skip_prefixes=("myprovider/",),      # don't double-prefix
)
```

**Step 2.** Add a field to `ProvidersConfig` in `nanobot/config/schema.py`:

```python
class ProvidersConfig(BaseModel):
    ...
    myprovider: ProviderConfig = ProviderConfig()
```

That's it! Environment variables, model prefixing, config matching, and `nanobot status` display will all work automatically.

**Common `ProviderSpec` options:**

| Field | Description | Example |
|-------|-------------|---------|
| `litellm_prefix` | Auto-prefix model names for LiteLLM | `"dashscope"` → `dashscope/qwen-max` |
| `skip_prefixes` | Don't prefix if model already starts with these | `("dashscope/", "openrouter/")` |
| `env_extras` | Additional env vars to set | `(("ZHIPUAI_API_KEY", "{api_key}"),)` |
| `model_overrides` | Per-model parameter overrides | `(("kimi-k2.5", {"temperature": 1.0}),)` |
| `is_gateway` | Can route any model (like OpenRouter) | `True` |
| `detect_by_key_prefix` | Detect gateway by API key prefix | `"sk-or-"` |
| `detect_by_base_keyword` | Detect gateway by API base URL | `"openrouter"` |
| `strip_model_prefix` | Strip existing prefix before re-prefixing | `True` (for AiHubMix) |
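The `litellm_prefix`/`skip_prefixes` behavior described above can be sketched as a small pure function (a simplified illustration, not the actual registry code):

```python
def apply_litellm_prefix(model: str, litellm_prefix: str,
                         skip_prefixes: tuple[str, ...]) -> str:
    """Prefix a model name for LiteLLM unless it is already prefixed."""
    if any(model.startswith(p) for p in skip_prefixes):
        return model  # already prefixed: don't double-prefix
    return f"{litellm_prefix}/{model}"

print(apply_litellm_prefix("qwen-max", "dashscope", ("dashscope/",)))
# dashscope/qwen-max
print(apply_litellm_prefix("dashscope/qwen-max", "dashscope", ("dashscope/",)))
# dashscope/qwen-max  (unchanged)
```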
### MCP (Model Context Protocol)

> [!TIP]
> The config format is compatible with Claude Desktop / Cursor. You can copy MCP server configs directly from any MCP server's README.

nanobot supports [MCP](https://modelcontextprotocol.io/) — connect external tool servers and use them as native agent tools. Add MCP servers to your `config.json`:

```json
{
  "tools": {
    "mcpServers": {
      "filesystem": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
      },
      "my-remote-mcp": {
        "url": "https://example.com/mcp/",
        "headers": { "Authorization": "Bearer xxxxx" }
      }
    }
  }
}
```

Two transport modes are supported:

| Mode | Config | Example |
|------|--------|---------|
| **Stdio** | `command` + `args` | Local process via `npx` / `uvx` |
| **HTTP** | `url` + `headers` (optional) | Remote endpoint (`https://mcp.example.com/sse`) |

Use `toolTimeout` to override the default 30s per-call timeout for slow servers:

```json
{
  "tools": {
    "mcpServers": {
      "my-slow-server": {
        "url": "https://example.com/mcp/",
        "toolTimeout": 120
      }
    }
  }
}
```

MCP tools are automatically discovered and registered on startup. The LLM can use them alongside built-in tools — no extra configuration needed.

nanobot hot-reloads agent runtime config from the active `config.json` on the next message, including `tools.mcpServers`, `tools.web.*`, `tools.exec.*`, `tools.restrictToWorkspace`, `agents.defaults.model`, `agents.defaults.maxToolIterations`, `agents.defaults.contextWindowTokens`, `agents.defaults.maxTokens`, `agents.defaults.temperature`, `agents.defaults.reasoningEffort`, `channels.sendProgress`, `channels.sendToolHints`, and `channels.voiceReply.*`. Channel connection settings and provider credentials still require a restart.

### Security

> [!TIP]
> For production deployments, set `"restrictToWorkspace": true` in your config to sandbox the agent.
> In `v0.1.4.post3` and earlier, an empty `allowFrom` allowed all senders.
> Since `v0.1.4.post4`, empty `allowFrom` denies all access by default. To allow all senders, set `"allowFrom": ["*"]`.

| Option | Default | Description |
|--------|---------|-------------|
| `tools.restrictToWorkspace` | `false` | When `true`, restricts **all** agent tools (shell, file read/write/edit, list) to the workspace directory. Prevents path traversal and out-of-scope access. |
| `tools.exec.enable` | `true` | When `false`, the shell `exec` tool is not registered at all. Use this to completely disable shell command execution. |
| `tools.exec.pathAppend` | `""` | Extra directories to append to `PATH` when running shell commands (e.g. `/usr/sbin` for `ufw`). |
| `channels.*.allowFrom` | `[]` (deny all) | Whitelist of user IDs. Empty denies all; use `["*"]` to allow everyone. |

## 🧩 Multiple Instances

Run multiple nanobot instances simultaneously with separate configs and runtime data. Use `--config` as the main entrypoint. Optionally pass `--workspace` during `onboard` when you want to initialize or update the saved workspace for a specific instance.

### Quick Start

If you want each instance to have its own dedicated workspace from the start, pass both `--config` and `--workspace` during onboarding.

**Initialize instances:**

```bash
# Create separate instance configs and workspaces
nanobot onboard --config ~/.nanobot-telegram/config.json --workspace ~/.nanobot-telegram/workspace
nanobot onboard --config ~/.nanobot-discord/config.json --workspace ~/.nanobot-discord/workspace
nanobot onboard --config ~/.nanobot-feishu/config.json --workspace ~/.nanobot-feishu/workspace
```

**Configure each instance:**

Edit `~/.nanobot-telegram/config.json`, `~/.nanobot-discord/config.json`, etc. with different channel settings. The workspace you passed during `onboard` is saved into each config as that instance's default workspace.
**Run instances:** ```bash # Instance A - Telegram bot nanobot gateway --config ~/.nanobot-telegram/config.json # Instance B - Discord bot nanobot gateway --config ~/.nanobot-discord/config.json # Instance C - Feishu bot with custom port nanobot gateway --config ~/.nanobot-feishu/config.json --port 18792 ``` ### Path Resolution When using `--config`, nanobot derives its runtime data directory from the config file location. The workspace still comes from `agents.defaults.workspace` unless you override it with `--workspace`. To open a CLI session against one of these instances locally: ```bash nanobot agent -c ~/.nanobot-telegram/config.json -m "Hello from Telegram instance" nanobot agent -c ~/.nanobot-discord/config.json -m "Hello from Discord instance" # Optional one-off workspace override nanobot agent -c ~/.nanobot-telegram/config.json -w /tmp/nanobot-telegram-test ``` > `nanobot agent` starts a local CLI agent using the selected workspace/config. It does not attach to or proxy through an already running `nanobot gateway` process. | Component | Resolved From | Example | |-----------|---------------|---------| | **Config** | `--config` path | `~/.nanobot-A/config.json` | | **Workspace** | `--workspace` or config | `~/.nanobot-A/workspace/` | | **Cron Jobs** | config directory | `~/.nanobot-A/cron/` | | **Media / runtime state** | config directory | `~/.nanobot-A/media/` | ### How It Works - `--config` selects which config file to load - By default, the workspace comes from `agents.defaults.workspace` in that config - If you pass `--workspace`, it overrides the workspace from the config file ### Minimal Setup 1. Copy your base config into a new instance directory. 2. Set a different `agents.defaults.workspace` for that instance. 3. Start the instance with `--config`. 
Example config: ```json { "agents": { "defaults": { "workspace": "~/.nanobot-telegram/workspace", "model": "anthropic/claude-sonnet-4-6" } }, "channels": { "telegram": { "enabled": true, "token": "YOUR_TELEGRAM_BOT_TOKEN" } }, "gateway": { "port": 18790 } } ``` Start separate instances: ```bash nanobot gateway --config ~/.nanobot-telegram/config.json nanobot gateway --config ~/.nanobot-discord/config.json ``` Override workspace for one-off runs when needed: ```bash nanobot gateway --config ~/.nanobot-telegram/config.json --workspace /tmp/nanobot-telegram-test ``` ### Common Use Cases - Run separate bots for Telegram, Discord, Feishu, and other platforms - Keep testing and production instances isolated - Use different models or providers for different teams - Serve multiple tenants with separate configs and runtime data ### Notes - nanobot does not expose local files itself. If you rely on local media delivery such as QQ screenshots, serve the relevant delivery-artifact directory with your own HTTP server and point `mediaBaseUrl` at it. 
- Each instance must use a different port if they run at the same time
- Use a different workspace per instance if you want isolated memory, sessions, and skills
- `--workspace` overrides the workspace defined in the config file
- Cron jobs and runtime media/state are derived from the config directory

## 💻 CLI Reference

| Command | Description |
|---------|-------------|
| `nanobot onboard` | Initialize config & workspace at `~/.nanobot/` |
| `nanobot onboard --wizard` | Launch the interactive onboarding wizard |
| `nanobot onboard -c <config> -w <workspace>` | Initialize or refresh a specific instance config and workspace |
| `nanobot agent -m "..."` | Chat with the agent |
| `nanobot agent -w <workspace>` | Chat against a specific workspace |
| `nanobot agent -w <workspace> -c <config>` | Chat against a specific workspace/config |
| `nanobot agent` | Interactive chat mode |
| `nanobot agent --no-markdown` | Show plain-text replies |
| `nanobot agent --logs` | Show runtime logs during chat |
| `nanobot gateway` | Start the gateway |
| `nanobot status` | Show status |
| `nanobot provider login openai-codex` | OAuth login for providers |
| `nanobot channels login <channel>` | Authenticate a channel interactively |
| `nanobot channels status` | Show channel status |

Interactive mode exits: `exit`, `quit`, `/exit`, `/quit`, `:q`, or `Ctrl+D`.
### Chat Slash Commands

These commands are available inside chats handled by `nanobot agent` or `nanobot gateway`:

| Command | Description |
|---------|-------------|
| `/new` | Start a new conversation |
| `/lang current` | Show the active command language |
| `/lang list` | List available command languages |
| `/lang set <lang>` | Switch command language |
| `/persona current` | Show the active persona |
| `/persona list` | List available personas |
| `/persona set <persona>` | Switch persona and start a new session |
| `/skill search <query>` | Search public skills on ClawHub |
| `/skill install <skill>` | Install a ClawHub skill into the active workspace |
| `/skill uninstall <skill>` | Remove a locally installed workspace skill from the active workspace |
| `/skill list` | List ClawHub-managed skills in the active workspace |
| `/skill update` | Update all ClawHub-managed skills in the active workspace |
| `/mcp [list]` | List configured MCP servers and registered MCP tools |
| `/stop` | Stop the current task |
| `/restart` | Restart the bot process |
| `/status` | Show runtime status, token usage, and session context estimate |
| `/help` | Show command help |

`/skill` uses the active workspace for the current process, not a hard-coded `~/.nanobot/workspace` path. If you start nanobot with `--workspace`, skill install/uninstall/list/update operate on that workspace's `skills/` directory. `/skill search` queries the live ClawHub registry API directly at `https://lightmake.site/api/skills` using the same sort order as the SkillHub web UI, so search does not depend on `npm` or `npx`. For `install`, `list`, and `update`, nanobot still shells out to `npx clawhub@latest` using ClawHub global options first: `--workdir <workspace> --no-input ...`. `/skill uninstall` removes the local `<workspace>/skills/<skill>/` directory directly and best-effort prunes `<workspace>/.clawhub/lock.json`, because current ClawHub docs do not document an uninstall subcommand. `/skill search` can legitimately return no matches.
In that case nanobot now replies with a clear "no skills found" message instead of leaving the channel in a transient searching state. If the ClawHub registry API or `npx clawhub@latest` cannot be reached, nanobot also surfaces the underlying network or HTTP error directly so the failure is visible to the user.
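The uninstall behavior described above can be sketched roughly as follows (an illustrative helper, not nanobot's actual code; the lockfile layout assumed here is a guess, since ClawHub does not document it):

```python
import json
import shutil
import tempfile
from pathlib import Path

def uninstall_skill(workspace: Path, name: str) -> bool:
    """Remove <workspace>/skills/<name> and best-effort prune the lockfile.

    The {"skills": {...}} lockfile layout is an assumption for illustration.
    """
    removed = False
    skill_dir = workspace / "skills" / name
    if skill_dir.is_dir():
        shutil.rmtree(skill_dir)
        removed = True
    lock_path = workspace / ".clawhub" / "lock.json"
    if lock_path.is_file():
        try:
            lock = json.loads(lock_path.read_text())
            lock.get("skills", {}).pop(name, None)
            lock_path.write_text(json.dumps(lock, indent=2))
        except (json.JSONDecodeError, OSError):
            pass  # best-effort only: leave a broken lockfile alone
    return removed

# Demo against a throwaway workspace
ws = Path(tempfile.mkdtemp())
(ws / "skills" / "weather").mkdir(parents=True)
(ws / ".clawhub").mkdir()
(ws / ".clawhub" / "lock.json").write_text(json.dumps({"skills": {"weather": {}}}))
ok = uninstall_skill(ws, "weather")
remaining = json.loads((ws / ".clawhub" / "lock.json").read_text())["skills"]
```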
Heartbeat (Periodic Tasks)

The gateway wakes up every 30 minutes and checks `HEARTBEAT.md` in your workspace (`~/.nanobot/workspace/HEARTBEAT.md`). If the file has tasks, the agent executes them and delivers results to your most recently active chat channel.

**Setup:** edit `~/.nanobot/workspace/HEARTBEAT.md` (created automatically by `nanobot onboard`):

```markdown
## Periodic Tasks
- [ ] Check weather forecast and send a summary
- [ ] Scan inbox for urgent emails
```

The agent can also manage this file itself — ask it to "add a periodic task" and it will update `HEARTBEAT.md` for you.

> **Note:** The gateway must be running (`nanobot gateway`) and you must have chatted with the bot at least once so it knows which channel to deliver to.
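The tasks in `HEARTBEAT.md` are plain Markdown checkboxes, so picking out the unchecked items can be sketched in a few lines (an illustrative parser, not nanobot's internal one):

```python
def pending_tasks(markdown: str) -> list[str]:
    """Collect unchecked '- [ ]' checkbox items from HEARTBEAT.md text."""
    tasks = []
    for line in markdown.splitlines():
        stripped = line.strip()
        if stripped.startswith("- [ ]"):
            tasks.append(stripped[len("- [ ]"):].strip())
    return tasks

sample = """## Periodic Tasks
- [ ] Check weather forecast and send a summary
- [x] Already done
- [ ] Scan inbox for urgent emails
"""
print(pending_tasks(sample))
```

Checked items (`- [x]`) are skipped, so completed tasks are not re-run.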
## 🐳 Docker

> [!TIP]
> The `-v ~/.nanobot:/root/.nanobot` flag mounts your local config directory into the container, so your config and workspace persist across container restarts.

### Docker Compose

```bash
docker compose run --rm nanobot-cli onboard   # first-time setup
vim ~/.nanobot/config.json                    # add API keys
docker compose up -d nanobot-gateway          # start gateway
```

```bash
docker compose run --rm nanobot-cli agent -m "Hello!"  # run CLI
docker compose logs -f nanobot-gateway                 # view logs
docker compose down                                    # stop
```

### Docker

```bash
# Build the image
docker build -t nanobot .

# Initialize config (first time only)
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot onboard

# Edit config on host to add API keys
vim ~/.nanobot/config.json

# Run gateway (connects to enabled channels, e.g. Telegram/Discord/Mochat)
docker run -v ~/.nanobot:/root/.nanobot -p 18790:18790 nanobot gateway

# Or run a single command
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot agent -m "Hello!"
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot status
```

## 🐧 Linux Service

Run the gateway as a systemd user service so it starts automatically and restarts on failure.

**1. Find the nanobot binary path:**

```bash
which nanobot   # e.g. /home/user/.local/bin/nanobot
```

**2. Create the service file** at `~/.config/systemd/user/nanobot-gateway.service` (replace `ExecStart` path if needed):

```ini
[Unit]
Description=Nanobot Gateway
After=network.target

[Service]
Type=simple
ExecStart=%h/.local/bin/nanobot gateway
Restart=always
RestartSec=10
NoNewPrivileges=yes
ProtectSystem=strict
ReadWritePaths=%h

[Install]
WantedBy=default.target
```

**3. 
Enable and start:**

```bash
systemctl --user daemon-reload
systemctl --user enable --now nanobot-gateway
```

**Common operations:**

```bash
systemctl --user status nanobot-gateway    # check status
systemctl --user restart nanobot-gateway   # restart after config changes
journalctl --user -u nanobot-gateway -f    # follow logs
```

If you edit the `.service` file itself, run `systemctl --user daemon-reload` before restarting.

> **Note:** User services only run while you are logged in. To keep the gateway running after logout, enable lingering:
>
> ```bash
> loginctl enable-linger $USER
> ```

## 📁 Project Structure

```
nanobot/
├── agent/          # 🧠 Core agent logic
│   ├── loop.py     # Agent loop (LLM ↔ tool execution)
│   ├── context.py  # Prompt builder
│   ├── memory.py   # Persistent memory
│   ├── skills.py   # Skills loader
│   ├── subagent.py # Background task execution
│   └── tools/      # Built-in tools (incl. spawn)
├── skills/         # 🎯 Bundled skills (github, weather, tmux...)
├── channels/       # 📱 Chat channel integrations
├── bus/            # 🚌 Message routing
├── cron/           # ⏰ Scheduled tasks
├── heartbeat/      # 💓 Proactive wake-up
├── providers/      # 🤖 LLM providers (OpenRouter, etc.)
├── session/        # 💬 Conversation sessions
├── config/         # ⚙️ Configuration
└── cli/            # 🖥️ Commands
```

## 🤝 Contribute & Roadmap

PRs welcome! The codebase is intentionally small and readable. 🤗

### Branching Strategy

| Branch | Purpose |
|--------|---------|
| `main` | Stable releases — bug fixes and minor improvements |
| `nightly` | Experimental features — new features and breaking changes |

**Unsure which branch to target?** See [CONTRIBUTING.md](./CONTRIBUTING.md) for details.

**Roadmap** — Pick an item and [open a PR](https://github.com/HKUDS/nanobot/pulls)!
- [ ] **Multi-modal** — See and hear (images, voice, video)
- [ ] **Long-term memory** — Never forget important context
- [ ] **Better reasoning** — Multi-step planning and reflection
- [ ] **More integrations** — Calendar and more
- [ ] **Self-improvement** — Learn from feedback and mistakes

### Contributors

## ⭐ Star History

Thanks for visiting ✨ nanobot!


nanobot is for educational, research, and technical exchange purposes only