diff --git a/README.md b/README.md
index e73beb5..c1b7e46 100644
--- a/README.md
+++ b/README.md
@@ -16,12 +16,14 @@
⚡️ Delivers core agent functionality in just **~4,000** lines of code — **99% smaller** than Clawdbot's 430k+ lines.
-📊 Real-time line count: **3,536 lines** (run `bash core_agent_lines.sh` to verify anytime)
+📊 Real-time line count: **3,663 lines** (run `bash core_agent_lines.sh` to verify anytime)
## 📢 News
+- **2026-02-14** 🔌 nanobot now supports MCP! See [MCP section](#mcp-model-context-protocol) for details.
- **2026-02-13** 🚀 Released v0.1.3.post7 — includes security hardening and multiple improvements. All users are recommended to upgrade to the latest version. See [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post7) for more details.
- **2026-02-12** 🧠 Redesigned memory system — less code, more reliable. Join the [discussion](https://github.com/HKUDS/nanobot/discussions/566) about it!
+- **2026-02-11** ✨ Enhanced CLI experience and added MiniMax support!
- **2026-02-10** 🚀 Released v0.1.3.post6 with improvements! Check the update [notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post6) and our [roadmap](https://github.com/HKUDS/nanobot/discussions/431).
- **2026-02-09** 💬 Added Slack, Email, and QQ support — nanobot now supports multiple chat platforms!
- **2026-02-08** 🔧 Refactored Providers — adding a new LLM provider now takes just 2 simple steps! Check [here](#providers).
@@ -107,14 +109,22 @@ nanobot onboard
**2. Configure** (`~/.nanobot/config.json`)
-For OpenRouter - recommended for global users:
+Add or merge these **two parts** into your config (all other options have sensible defaults).
+
+*Set your API key* (e.g. OpenRouter, recommended for global users):
```json
{
"providers": {
"openrouter": {
"apiKey": "sk-or-v1-xxx"
}
- },
+ }
+}
+```
+
+*Set your model*:
+```json
+{
"agents": {
"defaults": {
"model": "anthropic/claude-opus-4-5"
@@ -126,48 +136,11 @@ For OpenRouter - recommended for global users:
**3. Chat**
```bash
-nanobot agent -m "What is 2+2?"
+nanobot agent
```
That's it! You have a working AI assistant in 2 minutes.
-## π₯οΈ Local Models (vLLM)
-
-Run nanobot with your own local models using vLLM or any OpenAI-compatible server.
-
-**1. Start your vLLM server**
-
-```bash
-vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
-```
-
-**2. Configure** (`~/.nanobot/config.json`)
-
-```json
-{
- "providers": {
- "vllm": {
- "apiKey": "dummy",
- "apiBase": "http://localhost:8000/v1"
- }
- },
- "agents": {
- "defaults": {
- "model": "meta-llama/Llama-3.1-8B-Instruct"
- }
- }
-}
-```
-
-**3. Chat**
-
-```bash
-nanobot agent -m "Hello from my local LLM!"
-```
-
-> [!TIP]
-> The `apiKey` can be any non-empty string for local servers that don't require authentication.
-
## 💬 Chat Apps
Talk to your nanobot through Telegram, Discord, WhatsApp, Feishu, Mochat, DingTalk, Slack, Email, or QQ — anytime, anywhere.
@@ -599,6 +572,7 @@ Config file: `~/.nanobot/config.json`
| Provider | Purpose | Get API Key |
|----------|---------|-------------|
+| `custom` | Any OpenAI-compatible endpoint | — |
| `openrouter` | LLM (recommended, access to all models) | [openrouter.ai](https://openrouter.ai) |
| `anthropic` | LLM (Claude direct) | [console.anthropic.com](https://console.anthropic.com) |
| `openai` | LLM (GPT direct) | [platform.openai.com](https://platform.openai.com) |
@@ -612,6 +586,68 @@ Config file: `~/.nanobot/config.json`
| `zhipu` | LLM (Zhipu GLM) | [open.bigmodel.cn](https://open.bigmodel.cn) |
| `vllm` | LLM (local, any OpenAI-compatible server) | — |
+
+Custom Provider (Any OpenAI-compatible API)
+
+If your provider is not listed above but exposes an **OpenAI-compatible API** (e.g. Together AI, Fireworks, Azure OpenAI, self-hosted endpoints), use the `custom` provider:
+
+```json
+{
+ "providers": {
+ "custom": {
+ "apiKey": "your-api-key",
+ "apiBase": "https://api.your-provider.com/v1"
+ }
+ },
+ "agents": {
+ "defaults": {
+ "model": "your-model-name"
+ }
+ }
+}
+```
+
+> The `custom` provider routes through LiteLLM's OpenAI-compatible path. It works with any endpoint that follows the OpenAI chat completions API format. The model name is passed directly to the endpoint without any prefix.
+
+
+
+
+vLLM (local / OpenAI-compatible)
+
+Run your own model with vLLM or any OpenAI-compatible server, then add to config:
+
+**1. Start the server** (example):
+```bash
+vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
+```
+
+**2. Add to config** (partial — merge into `~/.nanobot/config.json`):
+
+*Provider (`apiKey` can be any non-empty string for local servers that don't require authentication):*
+```json
+{
+ "providers": {
+ "vllm": {
+ "apiKey": "dummy",
+ "apiBase": "http://localhost:8000/v1"
+ }
+ }
+}
+```
+
+*Model:*
+```json
+{
+ "agents": {
+ "defaults": {
+ "model": "meta-llama/Llama-3.1-8B-Instruct"
+ }
+ }
+}
+```
+
+
+
Adding a New Provider (Developer Guide)
@@ -657,8 +693,43 @@ That's it! Environment variables, model prefixing, config matching, and `nanobot
+### MCP (Model Context Protocol)
+
+> [!TIP]
+> The config format is compatible with Claude Desktop / Cursor. You can copy MCP server configs directly from any MCP server's README.
+
+nanobot supports [MCP](https://modelcontextprotocol.io/) — connect external tool servers and use them as native agent tools.
+
+Add MCP servers to your `config.json`:
+
+```json
+{
+ "tools": {
+ "mcpServers": {
+ "filesystem": {
+ "command": "npx",
+ "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
+ }
+ }
+ }
+}
+```
+
+Two transport modes are supported:
+
+| Mode | Config | Example |
+|------|--------|---------|
+| **Stdio** | `command` + `args` | Local process via `npx` / `uvx` |
+| **HTTP** | `url` | Remote endpoint (`https://mcp.example.com/sse`) |
+
+MCP tools are automatically discovered and registered on startup. The LLM can use them alongside built-in tools — no extra configuration needed.
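+
+For the HTTP mode, the config shape is the same but with a `url` instead of `command`/`args`. A sketch (the server name `remote` is arbitrary, and the endpoint is the placeholder from the table above; substitute your server's real URL):
+
+```json
+{
+  "tools": {
+    "mcpServers": {
+      "remote": {
+        "url": "https://mcp.example.com/sse"
+      }
+    }
+  }
+}
+```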
+
+
+
+
### Security
+> [!TIP]
> For production deployments, set `"restrictToWorkspace": true` in your config to sandbox the agent.
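+
+A minimal sketch of that setting, assuming it sits under the `tools` section of `~/.nanobot/config.json` (merge with your existing config):
+
+```json
+{
+  "tools": {
+    "restrictToWorkspace": true
+  }
+}
+```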
| Option | Default | Description |
@@ -753,7 +824,6 @@ PRs welcome! The codebase is intentionally small and readable. 🤝
**Roadmap** — Pick an item and [open a PR](https://github.com/HKUDS/nanobot/pulls)!
-- [x] **Voice Transcription** β Support for Groq Whisper (Issue #13)
- [ ] **Multi-modal** — See and hear (images, voice, video)
- [ ] **Long-term memory** — Never forget important context
- [ ] **Better reasoning** — Multi-step planning and reflection
diff --git a/nanobot/agent/loop.py b/nanobot/agent/loop.py
index e28166f..2d3ed94 100644
--- a/nanobot/agent/loop.py
+++ b/nanobot/agent/loop.py
@@ -1,7 +1,9 @@
"""Agent loop: the core processing engine."""
import asyncio
+from contextlib import AsyncExitStack
import json
+import json_repair
from pathlib import Path
from typing import Any
@@ -50,6 +52,7 @@ class AgentLoop:
cron_service: "CronService | None" = None,
restrict_to_workspace: bool = False,
session_manager: SessionManager | None = None,
+ mcp_servers: dict | None = None,
):
from nanobot.config.schema import ExecToolConfig
from nanobot.cron.service import CronService
@@ -82,6 +85,9 @@ class AgentLoop:
)
self._running = False
+ self._mcp_servers = mcp_servers or {}
+ self._mcp_stack: AsyncExitStack | None = None
+ self._mcp_connected = False
self._register_default_tools()
def _register_default_tools(self) -> None:
@@ -116,6 +122,16 @@ class AgentLoop:
if self.cron_service:
self.tools.register(CronTool(self.cron_service))
+ async def _connect_mcp(self) -> None:
+ """Connect to configured MCP servers (one-time, lazy)."""
+ if self._mcp_connected or not self._mcp_servers:
+ return
+ self._mcp_connected = True
+ from nanobot.agent.tools.mcp import connect_mcp_servers
+ self._mcp_stack = AsyncExitStack()
+ await self._mcp_stack.__aenter__()
+ await connect_mcp_servers(self._mcp_servers, self.tools, self._mcp_stack)
+
def _set_tool_context(self, channel: str, chat_id: str) -> None:
"""Update context for all tools that need routing info."""
if message_tool := self.tools.get("message"):
@@ -191,6 +207,7 @@ class AgentLoop:
async def run(self) -> None:
"""Run the agent loop, processing messages from the bus."""
self._running = True
+ await self._connect_mcp()
logger.info("Agent loop started")
while self._running:
@@ -213,6 +230,15 @@ class AgentLoop:
except asyncio.TimeoutError:
continue
+ async def close_mcp(self) -> None:
+ """Close MCP connections."""
+ if self._mcp_stack:
+ try:
+ await self._mcp_stack.aclose()
+ except (RuntimeError, BaseExceptionGroup):
+ pass # MCP SDK cancel scope cleanup is noisy but harmless
+ self._mcp_stack = None
+
def stop(self) -> None:
"""Stop the agent loop."""
self._running = False
@@ -403,9 +429,15 @@ Respond with ONLY valid JSON, no markdown fences."""
model=self.model,
)
text = (response.content or "").strip()
+ if not text:
+ logger.warning("Memory consolidation: LLM returned empty response, skipping")
+ return
if text.startswith("```"):
text = text.split("\n", 1)[-1].rsplit("```", 1)[0].strip()
- result = json.loads(text)
+ result = json_repair.loads(text)
+ if not isinstance(result, dict):
+ logger.warning(f"Memory consolidation: unexpected response type, skipping. Response: {text[:200]}")
+ return
if entry := result.get("history_entry"):
# Defensive: ensure entry is a string (LLM may return dict)
@@ -446,6 +478,7 @@ Respond with ONLY valid JSON, no markdown fences."""
Returns:
The agent's response.
"""
+ await self._connect_mcp()
msg = InboundMessage(
channel=channel,
sender_id="user",
diff --git a/nanobot/agent/tools/mcp.py b/nanobot/agent/tools/mcp.py
new file mode 100644
index 0000000..1c8eac4
--- /dev/null
+++ b/nanobot/agent/tools/mcp.py
@@ -0,0 +1,80 @@
+"""MCP client: connects to MCP servers and wraps their tools as native nanobot tools."""
+
+from contextlib import AsyncExitStack
+from typing import Any
+
+from loguru import logger
+
+from nanobot.agent.tools.base import Tool
+from nanobot.agent.tools.registry import ToolRegistry
+
+
+class MCPToolWrapper(Tool):
+ """Wraps a single MCP server tool as a nanobot Tool."""
+
+ def __init__(self, session, server_name: str, tool_def):
+ self._session = session
+ self._original_name = tool_def.name
+ self._name = f"mcp_{server_name}_{tool_def.name}"
+ self._description = tool_def.description or tool_def.name
+ self._parameters = tool_def.inputSchema or {"type": "object", "properties": {}}
+
+ @property
+ def name(self) -> str:
+ return self._name
+
+ @property
+ def description(self) -> str:
+ return self._description
+
+ @property
+ def parameters(self) -> dict[str, Any]:
+ return self._parameters
+
+ async def execute(self, **kwargs: Any) -> str:
+ from mcp import types
+ result = await self._session.call_tool(self._original_name, arguments=kwargs)
+ parts = []
+ for block in result.content:
+ if isinstance(block, types.TextContent):
+ parts.append(block.text)
+ else:
+ parts.append(str(block))
+ return "\n".join(parts) or "(no output)"
+
+
+async def connect_mcp_servers(
+ mcp_servers: dict, registry: ToolRegistry, stack: AsyncExitStack
+) -> None:
+ """Connect to configured MCP servers and register their tools."""
+ from mcp import ClientSession, StdioServerParameters
+ from mcp.client.stdio import stdio_client
+
+ for name, cfg in mcp_servers.items():
+ try:
+ if cfg.command:
+ params = StdioServerParameters(
+ command=cfg.command, args=cfg.args, env=cfg.env or None
+ )
+ read, write = await stack.enter_async_context(stdio_client(params))
+ elif cfg.url:
+                from mcp.client.streamable_http import streamablehttp_client
+                read, write, _ = await stack.enter_async_context(
+                    streamablehttp_client(cfg.url)
+                )
+ else:
+ logger.warning(f"MCP server '{name}': no command or url configured, skipping")
+ continue
+
+ session = await stack.enter_async_context(ClientSession(read, write))
+ await session.initialize()
+
+ tools = await session.list_tools()
+ for tool_def in tools.tools:
+ wrapper = MCPToolWrapper(session, name, tool_def)
+ registry.register(wrapper)
+ logger.debug(f"MCP: registered tool '{wrapper.name}' from server '{name}'")
+
+ logger.info(f"MCP server '{name}': connected, {len(tools.tools)} tools registered")
+ except Exception as e:
+ logger.error(f"MCP server '{name}': failed to connect: {e}")
diff --git a/nanobot/cli/commands.py b/nanobot/cli/commands.py
index 17210ce..6a9c92f 100644
--- a/nanobot/cli/commands.py
+++ b/nanobot/cli/commands.py
@@ -346,6 +346,7 @@ def gateway(
cron_service=cron,
restrict_to_workspace=config.tools.restrict_to_workspace,
session_manager=session_manager,
+ mcp_servers=config.tools.mcp_servers,
)
# Set cron callback (needs agent)
@@ -403,6 +404,8 @@ def gateway(
)
except KeyboardInterrupt:
console.print("\nShutting down...")
+ finally:
+ await agent.close_mcp()
heartbeat.stop()
cron.stop()
agent.stop()
@@ -453,6 +456,7 @@ def agent(
brave_api_key=config.tools.web.search.api_key or None,
exec_config=config.tools.exec,
restrict_to_workspace=config.tools.restrict_to_workspace,
+ mcp_servers=config.tools.mcp_servers,
)
# Show spinner when logs are off (no output to miss); skip when logs are on
@@ -469,6 +473,7 @@ def agent(
with _thinking_ctx():
response = await agent_loop.process_direct(message, session_id)
_print_agent_response(response, render_markdown=markdown)
+ await agent_loop.close_mcp()
asyncio.run(run_once())
else:
@@ -484,30 +489,33 @@ def agent(
signal.signal(signal.SIGINT, _exit_on_sigint)
async def run_interactive():
- while True:
- try:
- _flush_pending_tty_input()
- user_input = await _read_interactive_input_async()
- command = user_input.strip()
- if not command:
- continue
+ try:
+ while True:
+ try:
+ _flush_pending_tty_input()
+ user_input = await _read_interactive_input_async()
+ command = user_input.strip()
+ if not command:
+ continue
- if _is_exit_command(command):
+ if _is_exit_command(command):
+ _restore_terminal()
+ console.print("\nGoodbye!")
+ break
+
+ with _thinking_ctx():
+ response = await agent_loop.process_direct(user_input, session_id)
+ _print_agent_response(response, render_markdown=markdown)
+ except KeyboardInterrupt:
_restore_terminal()
console.print("\nGoodbye!")
break
-
- with _thinking_ctx():
- response = await agent_loop.process_direct(user_input, session_id)
- _print_agent_response(response, render_markdown=markdown)
- except KeyboardInterrupt:
- _restore_terminal()
- console.print("\nGoodbye!")
- break
- except EOFError:
- _restore_terminal()
- console.print("\nGoodbye!")
- break
+ except EOFError:
+ _restore_terminal()
+ console.print("\nGoodbye!")
+ break
+ finally:
+ await agent_loop.close_mcp()
asyncio.run(run_interactive())
diff --git a/nanobot/config/schema.py b/nanobot/config/schema.py
index 60bbc69..0934aac 100644
--- a/nanobot/config/schema.py
+++ b/nanobot/config/schema.py
@@ -216,11 +216,20 @@ class ExecToolConfig(BaseModel):
timeout: int = 60
+class MCPServerConfig(BaseModel):
+ """MCP server connection configuration (stdio or HTTP)."""
+ command: str = "" # Stdio: command to run (e.g. "npx")
+ args: list[str] = Field(default_factory=list) # Stdio: command arguments
+ env: dict[str, str] = Field(default_factory=dict) # Stdio: extra env vars
+ url: str = "" # HTTP: streamable HTTP endpoint URL
+
+
class ToolsConfig(BaseModel):
"""Tools configuration."""
web: WebToolsConfig = Field(default_factory=WebToolsConfig)
exec: ExecToolConfig = Field(default_factory=ExecToolConfig)
restrict_to_workspace: bool = False # If true, restrict all tool access to workspace directory
+ mcp_servers: dict[str, MCPServerConfig] = Field(default_factory=dict)
class Config(BaseSettings):
diff --git a/nanobot/cron/service.py b/nanobot/cron/service.py
index d1965a9..4da845a 100644
--- a/nanobot/cron/service.py
+++ b/nanobot/cron/service.py
@@ -4,6 +4,7 @@ import asyncio
import json
import time
import uuid
+from datetime import datetime
from pathlib import Path
from typing import Any, Callable, Coroutine
@@ -30,9 +31,13 @@ def _compute_next_run(schedule: CronSchedule, now_ms: int) -> int | None:
if schedule.kind == "cron" and schedule.expr:
try:
from croniter import croniter
- cron = croniter(schedule.expr, time.time())
- next_time = cron.get_next()
- return int(next_time * 1000)
+ from zoneinfo import ZoneInfo
+ base_time = time.time()
+ tz = ZoneInfo(schedule.tz) if schedule.tz else datetime.now().astimezone().tzinfo
+ base_dt = datetime.fromtimestamp(base_time, tz=tz)
+ cron = croniter(schedule.expr, base_dt)
+ next_dt = cron.get_next(datetime)
+ return int(next_dt.timestamp() * 1000)
except Exception:
return None
diff --git a/nanobot/providers/litellm_provider.py b/nanobot/providers/litellm_provider.py
index a39893b..ed4cf49 100644
--- a/nanobot/providers/litellm_provider.py
+++ b/nanobot/providers/litellm_provider.py
@@ -1,6 +1,7 @@
"""LiteLLM provider implementation for multi-provider support."""
import json
+import json_repair
import os
from typing import Any
@@ -173,10 +174,7 @@ class LiteLLMProvider(LLMProvider):
# Parse arguments from JSON string if needed
args = tc.function.arguments
if isinstance(args, str):
- try:
- args = json.loads(args)
- except json.JSONDecodeError:
- args = {"raw": args}
+ args = json_repair.loads(args)
tool_calls.append(ToolCallRequest(
id=tc.id,
diff --git a/pyproject.toml b/pyproject.toml
index 80e54c8..147e799 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -38,6 +38,8 @@ dependencies = [
"qq-botpy>=1.0.0",
"python-socks[asyncio]>=2.4.0",
"prompt-toolkit>=3.0.0",
+ "mcp>=1.0.0",
+ "json-repair>=0.30.0",
]
[project.optional-dependencies]