Merge branch 'main' into pr-644

This commit is contained in:
Re-bin
2026-02-20 08:08:55 +00:00
44 changed files with 1555 additions and 470 deletions

View File

@@ -16,24 +16,33 @@
⚡️ Delivers core agent functionality in just **~4,000** lines of code — **99% smaller** than Clawdbot's 430k+ lines.
📏 Real-time line count: **3,761 lines** (run `bash core_agent_lines.sh` to verify anytime)
## 📢 News
- **2026-02-17** 🎉 Released **v0.1.4** — MCP support, progress streaming, new providers, and multiple channel improvements. Please see [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.4) for details.
- **2026-02-16** 🦞 nanobot now integrates a [ClawHub](https://clawhub.ai) skill — search and install public agent skills.
- **2026-02-15** 🔑 nanobot now supports the OpenAI Codex provider with OAuth login.
- **2026-02-14** 🔌 nanobot now supports MCP! See [MCP section](#mcp-model-context-protocol) for details.
- **2026-02-13** 🎉 Released **v0.1.3.post7** — includes security hardening and multiple improvements. **Please upgrade to the latest version to address security issues**. See [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post7) for more details.
- **2026-02-12** 🧠 Redesigned memory system — less code, more reliable. Join the [discussion](https://github.com/HKUDS/nanobot/discussions/566) about it!
- **2026-02-11** ✨ Enhanced CLI experience and added MiniMax support!
- **2026-02-10** 🎉 Released **v0.1.3.post6** with improvements! Check the release [notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post6) and our [roadmap](https://github.com/HKUDS/nanobot/discussions/431).
- **2026-02-09** 💬 Added Slack, Email, and QQ support — nanobot now works with multiple chat platforms!
- **2026-02-08** 🔧 Refactored providers — adding a new LLM provider now takes just 2 simple steps! Check [here](#providers).
<details>
<summary>Earlier news</summary>
- **2026-02-07** 🚀 Released **v0.1.3.post5** with Qwen support & several key improvements! Check [here](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post5) for details.
- **2026-02-06** ✨ Added Moonshot/Kimi provider, Discord integration, and enhanced security hardening!
- **2026-02-05** ✨ Added Feishu channel, DeepSeek provider, and enhanced scheduled tasks support!
- **2026-02-04** 🚀 Released **v0.1.3.post4** with multi-provider & Docker support! Check [here](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post4) for details.
- **2026-02-03** ⚡ Integrated vLLM for local LLM support and improved natural language task scheduling!
- **2026-02-02** 🎉 nanobot officially launched! Welcome to try 🐈 nanobot!
</details>
## Key Features of nanobot:
🪶 **Ultra-Lightweight**: Just ~4,000 lines of core agent code — 99% smaller than Clawdbot.
@@ -143,19 +152,19 @@ That's it! You have a working AI assistant in 2 minutes.
## 💬 Chat Apps
Connect nanobot to your favorite chat platform.
| Channel | What you need |
|---------|---------------|
| **Telegram** | Bot token from @BotFather |
| **Discord** | Bot token + Message Content intent |
| **WhatsApp** | QR code scan |
| **Feishu** | App ID + App Secret |
| **Mochat** | Claw token (auto-setup available) |
| **DingTalk** | App Key + App Secret |
| **Slack** | Bot token + App-Level token |
| **Email** | IMAP/SMTP credentials |
| **QQ** | App ID + App Secret |
<details>
<summary><b>Telegram</b> (Recommended)</summary>
@@ -572,7 +581,7 @@ Config file: `~/.nanobot/config.json`
| Provider | Purpose | Get API Key |
|----------|---------|-------------|
| `custom` | Any OpenAI-compatible endpoint (direct, no LiteLLM) | — |
| `openrouter` | LLM (recommended, access to all models) | [openrouter.ai](https://openrouter.ai) |
| `anthropic` | LLM (Claude direct) | [console.anthropic.com](https://console.anthropic.com) |
| `openai` | LLM (GPT direct) | [platform.openai.com](https://platform.openai.com) |
@@ -581,15 +590,48 @@ Config file: `~/.nanobot/config.json`
| `gemini` | LLM (Gemini direct) | [aistudio.google.com](https://aistudio.google.com) |
| `minimax` | LLM (MiniMax direct) | [platform.minimax.io](https://platform.minimax.io) |
| `aihubmix` | LLM (API gateway, access to all models) | [aihubmix.com](https://aihubmix.com) |
| `siliconflow` | LLM (SiliconFlow/硅基流动, API gateway) | [siliconflow.cn](https://siliconflow.cn) |
| `dashscope` | LLM (Qwen) | [dashscope.console.aliyun.com](https://dashscope.console.aliyun.com) |
| `moonshot` | LLM (Moonshot/Kimi) | [platform.moonshot.cn](https://platform.moonshot.cn) |
| `zhipu` | LLM (Zhipu GLM) | [open.bigmodel.cn](https://open.bigmodel.cn) |
| `vllm` | LLM (local, any OpenAI-compatible server) | — |
| `openai_codex` | LLM (Codex, OAuth) | `nanobot provider login openai-codex` |
| `github_copilot` | LLM (GitHub Copilot, OAuth) | `nanobot provider login github-copilot` |
<details>
<summary><b>OpenAI Codex (OAuth)</b></summary>
Codex uses OAuth instead of API keys. Requires a ChatGPT Plus or Pro account.
**1. Login:**
```bash
nanobot provider login openai-codex
```
**2. Set model** (merge into `~/.nanobot/config.json`):
```json
{
"agents": {
"defaults": {
"model": "openai-codex/gpt-5.1-codex"
}
}
}
```
**3. Chat:**
```bash
nanobot agent -m "Hello!"
```
> Docker users: use `docker run -it` for interactive OAuth login.
</details>
<details>
<summary><b>Custom Provider (Any OpenAI-compatible API)</b></summary>
Connects directly to any OpenAI-compatible endpoint — LM Studio, llama.cpp, Together AI, Fireworks, Azure OpenAI, or any self-hosted server. Bypasses LiteLLM; the model name is passed as-is.
```json
{
@@ -607,7 +649,7 @@ If your provider is not listed above but exposes an **OpenAI-compatible API** (e
}
```
> For local servers that don't require a key, set `apiKey` to any non-empty string (e.g. `"no-key"`).
</details>
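As a concrete sketch, a minimal `custom` entry for a local LM Studio server might look like the following — `apiKey` comes from the note above, while the other field names and nesting are illustrative guesses at the config schema, not confirmed keys:

```json
{
  "providers": {
    "custom": {
      "apiBase": "http://localhost:1234/v1",
      "apiKey": "no-key",
      "model": "qwen2.5-7b-instruct"
    }
  }
}
```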
@@ -749,6 +791,7 @@ MCP tools are automatically discovered and registered on startup. The LLM can us
| `nanobot agent --logs` | Show runtime logs during chat |
| `nanobot gateway` | Start the gateway |
| `nanobot status` | Show status |
| `nanobot provider login openai-codex` | OAuth login for providers |
| `nanobot channels login` | Link WhatsApp (scan QR) |
| `nanobot channels status` | Show channel status |
@@ -776,7 +819,21 @@ nanobot cron remove <job_id>
> [!TIP]
> The `-v ~/.nanobot:/root/.nanobot` flag mounts your local config directory into the container, so your config and workspace persist across container restarts.
### Docker Compose
```bash
docker compose run --rm nanobot-cli onboard # first-time setup
vim ~/.nanobot/config.json # add API keys
docker compose up -d nanobot-gateway # start gateway
```
```bash
docker compose run --rm nanobot-cli agent -m "Hello!" # run CLI
docker compose logs -f nanobot-gateway # view logs
docker compose down # stop
```
### Docker
```bash
# Build the image

View File

@@ -5,7 +5,7 @@
If you discover a security vulnerability in nanobot, please report it by:
1. **DO NOT** open a public GitHub issue
2. Create a private security advisory on GitHub or contact the repository maintainers (xubinrencs@gmail.com)
3. Include:
- Description of the vulnerability
- Steps to reproduce

docker-compose.yml Normal file
View File

@@ -0,0 +1,31 @@
x-common-config: &common-config
  build:
    context: .
    dockerfile: Dockerfile
  volumes:
    - ~/.nanobot:/root/.nanobot

services:
  nanobot-gateway:
    container_name: nanobot-gateway
    <<: *common-config
    command: ["gateway"]
    restart: unless-stopped
    ports:
      - 18790:18790
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 1G
        reservations:
          cpus: '0.25'
          memory: 256M

  nanobot-cli:
    <<: *common-config
    profiles:
      - cli
    command: ["status"]
    stdin_open: true
    tty: true

View File

@@ -2,5 +2,5 @@
nanobot - A lightweight AI agent framework
"""
__version__ = "0.1.4"
__logo__ = "🐈"

View File

@@ -105,7 +105,7 @@ IMPORTANT: When responding to direct questions or conversations, reply directly
Only use the 'message' tool when you need to send a message to a specific chat channel (like WhatsApp).
For normal conversation, just respond with text - do not call the message tool.
Always be helpful, accurate, and concise. Before calling tools, briefly tell the user what you're about to do (one short sentence in the user's language).
When remembering something important, write to {workspace_path}/memory/MEMORY.md
To recall past events, grep {workspace_path}/memory/HISTORY.md"""
@@ -225,14 +225,18 @@ To recall past events, grep {workspace_path}/memory/HISTORY.md"""
Returns:
Updated message list.
"""
msg: dict[str, Any] = {"role": "assistant"}
# Omit empty content — some backends reject empty text blocks
if content:
msg["content"] = content
if tool_calls:
msg["tool_calls"] = tool_calls
# Include reasoning content when provided (required by some thinking models)
if reasoning_content:
msg["reasoning_content"] = reasoning_content
messages.append(msg)
return messages

View File

@@ -5,7 +5,8 @@ from contextlib import AsyncExitStack
import json
import json_repair
from pathlib import Path
import re
from typing import Any, Awaitable, Callable
from loguru import logger
@@ -92,12 +93,12 @@ class AgentLoop:
def _register_default_tools(self) -> None:
"""Register the default set of tools."""
# File tools (workspace for relative paths, restrict if configured)
allowed_dir = self.workspace if self.restrict_to_workspace else None
self.tools.register(ReadFileTool(workspace=self.workspace, allowed_dir=allowed_dir))
self.tools.register(WriteFileTool(workspace=self.workspace, allowed_dir=allowed_dir))
self.tools.register(EditFileTool(workspace=self.workspace, allowed_dir=allowed_dir))
self.tools.register(ListDirTool(workspace=self.workspace, allowed_dir=allowed_dir))
# Shell tool
self.tools.register(ExecTool(
@@ -146,12 +147,34 @@ class AgentLoop:
if isinstance(cron_tool, CronTool):
cron_tool.set_context(channel, chat_id)
@staticmethod
def _strip_think(text: str | None) -> str | None:
"""Remove <think>…</think> blocks that some models embed in content."""
if not text:
return None
return re.sub(r"<think>[\s\S]*?</think>", "", text).strip() or None
@staticmethod
def _tool_hint(tool_calls: list) -> str:
"""Format tool calls as concise hint, e.g. 'web_search("query")'."""
def _fmt(tc):
val = next(iter(tc.arguments.values()), None) if tc.arguments else None
if not isinstance(val, str):
return tc.name
return f'{tc.name}("{val[:40]}")' if len(val) > 40 else f'{tc.name}("{val}")'
return ", ".join(_fmt(tc) for tc in tool_calls)
async def _run_agent_loop(
self,
initial_messages: list[dict],
on_progress: Callable[[str], Awaitable[None]] | None = None,
) -> tuple[str | None, list[str]]:
"""
Run the agent iteration loop.
Args:
initial_messages: Starting messages for the LLM conversation.
on_progress: Optional callback to push intermediate content to the user.
Returns:
Tuple of (final_content, list_of_tools_used).
@@ -173,13 +196,17 @@ class AgentLoop:
)
if response.has_tool_calls:
if on_progress:
clean = self._strip_think(response.content)
await on_progress(clean or self._tool_hint(response.tool_calls))
tool_call_dicts = [
{
"id": tc.id,
"type": "function",
"function": {
"name": tc.name,
"arguments": json.dumps(tc.arguments, ensure_ascii=False)
}
}
for tc in response.tool_calls
@@ -192,14 +219,13 @@ class AgentLoop:
for tool_call in response.tool_calls:
tools_used.append(tool_call.name)
args_str = json.dumps(tool_call.arguments, ensure_ascii=False)
logger.info("Tool call: {}({})", tool_call.name, args_str[:200])
result = await self.tools.execute(tool_call.name, tool_call.arguments)
messages = self.context.add_tool_result(
messages, tool_call.id, tool_call.name, result
)
else:
final_content = self._strip_think(response.content)
break
return final_content, tools_used
@@ -221,7 +247,7 @@ class AgentLoop:
if response:
await self.bus.publish_outbound(response)
except Exception as e:
logger.error("Error processing message: {}", e)
await self.bus.publish_outbound(OutboundMessage(
channel=msg.channel,
chat_id=msg.chat_id,
@@ -244,13 +270,19 @@ class AgentLoop:
self._running = False
logger.info("Agent loop stopping")
async def _process_message(
self,
msg: InboundMessage,
session_key: str | None = None,
on_progress: Callable[[str], Awaitable[None]] | None = None,
) -> OutboundMessage | None:
"""
Process a single inbound message.
Args:
msg: The inbound message to process.
session_key: Override session key (used by process_direct).
on_progress: Optional callback for intermediate output (defaults to bus publish).
Returns:
The response message, or None if no response needed.
@@ -260,7 +292,7 @@ class AgentLoop:
return await self._process_system_message(msg)
preview = msg.content[:80] + "..." if len(msg.content) > 80 else msg.content
logger.info("Processing message from {}:{}: {}", msg.channel, msg.sender_id, preview)
key = session_key or msg.session_key
session = self.sessions.get_or_create(key)
@@ -297,13 +329,22 @@ class AgentLoop:
channel=msg.channel,
chat_id=msg.chat_id,
)
async def _bus_progress(content: str) -> None:
await self.bus.publish_outbound(OutboundMessage(
channel=msg.channel, chat_id=msg.chat_id, content=content,
metadata=msg.metadata or {},
))
final_content, tools_used = await self._run_agent_loop(
initial_messages, on_progress=on_progress or _bus_progress,
)
if final_content is None:
final_content = "I've completed processing but have no response to give."
preview = final_content[:120] + "..." if len(final_content) > 120 else final_content
logger.info("Response to {}:{}: {}", msg.channel, msg.sender_id, preview)
session.add_message("user", msg.content)
session.add_message("assistant", final_content,
@@ -324,7 +365,7 @@ class AgentLoop:
The chat_id field contains "original_channel:original_chat_id" to route
the response back to the correct destination.
"""
logger.info("Processing system message from {}", msg.sender_id)
# Parse origin from chat_id (format: "channel:chat_id")
if ":" in msg.chat_id:
@@ -372,22 +413,22 @@ class AgentLoop:
if archive_all:
old_messages = session.messages
keep_count = 0
logger.info("Memory consolidation (archive_all): {} total messages archived", len(session.messages))
else:
keep_count = self.memory_window // 2
if len(session.messages) <= keep_count:
logger.debug("Session {}: No consolidation needed (messages={}, keep={})", session.key, len(session.messages), keep_count)
return
messages_to_process = len(session.messages) - session.last_consolidated
if messages_to_process <= 0:
logger.debug("Session {}: No new messages to consolidate (last_consolidated={}, total={})", session.key, session.last_consolidated, len(session.messages))
return
old_messages = session.messages[session.last_consolidated:-keep_count]
if not old_messages:
return
logger.info("Memory consolidation started: {} total, {} new to consolidate, {} keep", len(session.messages), len(old_messages), keep_count)
lines = []
for m in old_messages:
@@ -436,7 +477,7 @@ Respond with ONLY valid JSON, no markdown fences."""
text = text.split("\n", 1)[-1].rsplit("```", 1)[0].strip()
result = json_repair.loads(text)
if not isinstance(result, dict):
logger.warning("Memory consolidation: unexpected response type, skipping. Response: {}", text[:200])
return
if entry := result.get("history_entry"):
@@ -455,9 +496,9 @@ Respond with ONLY valid JSON, no markdown fences."""
session.last_consolidated = 0
else:
session.last_consolidated = len(session.messages) - keep_count
logger.info("Memory consolidation done: {} messages, last_consolidated={}", len(session.messages), session.last_consolidated)
except Exception as e:
logger.error("Memory consolidation failed: {}", e)
async def process_direct(
self,
@@ -465,6 +506,7 @@ Respond with ONLY valid JSON, no markdown fences."""
session_key: str = "cli:direct",
channel: str = "cli",
chat_id: str = "direct",
on_progress: Callable[[str], Awaitable[None]] | None = None,
) -> str:
"""
Process a message directly (for CLI or cron usage).
@@ -474,6 +516,7 @@ Respond with ONLY valid JSON, no markdown fences."""
session_key: Session identifier (overrides channel:chat_id for session lookup).
channel: Source channel (for tool context routing).
chat_id: Source chat ID (for tool context routing).
on_progress: Optional callback for intermediate output.
Returns:
The agent's response.
@@ -486,5 +529,5 @@ Respond with ONLY valid JSON, no markdown fences."""
content=content
)
response = await self._process_message(msg, session_key=session_key, on_progress=on_progress)
return response.content if response else ""

View File

@@ -167,10 +167,10 @@ class SkillsLoader:
return content
def _parse_nanobot_metadata(self, raw: str) -> dict:
"""Parse skill metadata JSON from frontmatter (supports nanobot and openclaw keys)."""
try:
data = json.loads(raw)
return data.get("nanobot", data.get("openclaw", {})) if isinstance(data, dict) else {}
except (json.JSONDecodeError, TypeError):
return {}
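The nanobot/openclaw key fallback can be sketched as a standalone function (input strings here are invented examples):

```python
import json


def parse_skill_metadata(raw: str) -> dict:
    """Prefer the "nanobot" frontmatter key, fall back to "openclaw", else {}."""
    try:
        data = json.loads(raw)
        return data.get("nanobot", data.get("openclaw", {})) if isinstance(data, dict) else {}
    except (json.JSONDecodeError, TypeError):
        return {}


print(parse_skill_metadata('{"openclaw": {"requires": ["exec"]}}'))  # {'requires': ['exec']}
```

One subtlety of `dict.get` here: the `openclaw` lookup is evaluated eagerly even when `nanobot` is present, which is harmless but worth knowing if the fallback ever grows side effects.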

View File

@@ -86,7 +86,7 @@ class SubagentManager:
# Cleanup when done
bg_task.add_done_callback(lambda _: self._running_tasks.pop(task_id, None))
logger.info("Spawned subagent [{}]: {}", task_id, display_label)
return f"Subagent [{display_label}] started (id: {task_id}). I'll notify you when it completes."
async def _run_subagent(
@@ -97,16 +97,16 @@ class SubagentManager:
origin: dict[str, str],
) -> None:
"""Execute the subagent task and announce the result."""
logger.info("Subagent [{}] starting task: {}", task_id, label)
try:
# Build subagent tools (no message tool, no spawn tool)
tools = ToolRegistry()
allowed_dir = self.workspace if self.restrict_to_workspace else None
tools.register(ReadFileTool(workspace=self.workspace, allowed_dir=allowed_dir))
tools.register(WriteFileTool(workspace=self.workspace, allowed_dir=allowed_dir))
tools.register(EditFileTool(workspace=self.workspace, allowed_dir=allowed_dir))
tools.register(ListDirTool(workspace=self.workspace, allowed_dir=allowed_dir))
tools.register(ExecTool(
working_dir=str(self.workspace),
timeout=self.exec_config.timeout,
@@ -146,7 +146,7 @@ class SubagentManager:
"type": "function",
"function": {
"name": tc.name,
"arguments": json.dumps(tc.arguments, ensure_ascii=False),
},
}
for tc in response.tool_calls
@@ -159,8 +159,8 @@ class SubagentManager:
# Execute tools
for tool_call in response.tool_calls:
args_str = json.dumps(tool_call.arguments, ensure_ascii=False)
logger.debug("Subagent [{}] executing: {} with arguments: {}", task_id, tool_call.name, args_str)
result = await tools.execute(tool_call.name, tool_call.arguments)
messages.append({
"role": "tool",
@@ -175,12 +175,12 @@ class SubagentManager:
if final_result is None:
final_result = "Task completed but no final response was generated."
logger.info("Subagent [{}] completed successfully", task_id)
await self._announce_result(task_id, label, task, final_result, origin, "ok")
except Exception as e:
error_msg = f"Error: {str(e)}"
logger.error("Subagent [{}] failed: {}", task_id, e)
await self._announce_result(task_id, label, task, error_msg, origin, "error")
async def _announce_result(
@@ -213,7 +213,7 @@ Summarize this naturally for the user. Keep it brief (1-2 sentences). Do not men
)
await self.bus.publish_inbound(msg)
logger.debug("Subagent [{}] announced result to {}:{}", task_id, origin['channel'], origin['chat_id'])
def _build_subagent_prompt(self, task: str) -> str:
"""Build a focused system prompt for the subagent."""

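A note on the recurring `ensure_ascii=False` change in this commit: by default `json.dumps` escapes every non-ASCII character to `\uXXXX` sequences, which bloats tool-call payloads and makes CJK content unreadable in logs. A standalone illustration (not nanobot code):

```python
import json

payload = {"city": "北京"}

# Default: non-ASCII characters are escaped to \uXXXX sequences.
escaped = json.dumps(payload)
# With ensure_ascii=False the original characters are kept verbatim.
verbatim = json.dumps(payload, ensure_ascii=False)

print(escaped)   # {"city": "\u5317\u4eac"}
print(verbatim)  # {"city": "北京"}
```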

@@ -50,6 +50,10 @@ class CronTool(Tool):
                     "type": "string",
                     "description": "Cron expression like '0 9 * * *' (for scheduled tasks)"
                 },
+                "tz": {
+                    "type": "string",
+                    "description": "IANA timezone for cron expressions (e.g. 'America/Vancouver')"
+                },
                 "at": {
                     "type": "string",
                     "description": "ISO datetime for one-time execution (e.g. '2026-02-12T10:30:00')"
@@ -68,30 +72,46 @@ class CronTool(Tool):
         message: str = "",
         every_seconds: int | None = None,
         cron_expr: str | None = None,
+        tz: str | None = None,
         at: str | None = None,
         job_id: str | None = None,
         **kwargs: Any
     ) -> str:
         if action == "add":
-            return self._add_job(message, every_seconds, cron_expr, at)
+            return self._add_job(message, every_seconds, cron_expr, tz, at)
         elif action == "list":
             return self._list_jobs()
         elif action == "remove":
             return self._remove_job(job_id)
         return f"Unknown action: {action}"

-    def _add_job(self, message: str, every_seconds: int | None, cron_expr: str | None, at: str | None) -> str:
+    def _add_job(
+        self,
+        message: str,
+        every_seconds: int | None,
+        cron_expr: str | None,
+        tz: str | None,
+        at: str | None,
+    ) -> str:
         if not message:
             return "Error: message is required for add"
         if not self._channel or not self._chat_id:
             return "Error: no session context (channel/chat_id)"
+        if tz and not cron_expr:
+            return "Error: tz can only be used with cron_expr"
+        if tz:
+            from zoneinfo import ZoneInfo
+            try:
+                ZoneInfo(tz)
+            except (KeyError, Exception):
+                return f"Error: unknown timezone '{tz}'"

         # Build schedule
         delete_after = False
         if every_seconds:
             schedule = CronSchedule(kind="every", every_ms=every_seconds * 1000)
         elif cron_expr:
-            schedule = CronSchedule(kind="cron", expr=cron_expr)
+            schedule = CronSchedule(kind="cron", expr=cron_expr, tz=tz)
         elif at:
             from datetime import datetime
             dt = datetime.fromisoformat(at)


@@ -6,9 +6,12 @@ from typing import Any

 from nanobot.agent.tools.base import Tool


-def _resolve_path(path: str, allowed_dir: Path | None = None) -> Path:
-    """Resolve path and optionally enforce directory restriction."""
-    resolved = Path(path).expanduser().resolve()
+def _resolve_path(path: str, workspace: Path | None = None, allowed_dir: Path | None = None) -> Path:
+    """Resolve path against workspace (if relative) and enforce directory restriction."""
+    p = Path(path).expanduser()
+    if not p.is_absolute() and workspace:
+        p = workspace / p
+    resolved = p.resolve()
     if allowed_dir and not str(resolved).startswith(str(allowed_dir.resolve())):
         raise PermissionError(f"Path {path} is outside allowed directory {allowed_dir}")
     return resolved
@@ -16,8 +19,9 @@ def _resolve_path(path: str, allowed_dir: Path | None = None) -> Path:
 class ReadFileTool(Tool):
     """Tool to read file contents."""

-    def __init__(self, allowed_dir: Path | None = None):
+    def __init__(self, workspace: Path | None = None, allowed_dir: Path | None = None):
+        self._workspace = workspace
         self._allowed_dir = allowed_dir

     @property
@@ -43,12 +47,12 @@ class ReadFileTool(Tool):
     async def execute(self, path: str, **kwargs: Any) -> str:
         try:
-            file_path = _resolve_path(path, self._allowed_dir)
+            file_path = _resolve_path(path, self._workspace, self._allowed_dir)
             if not file_path.exists():
                 return f"Error: File not found: {path}"
             if not file_path.is_file():
                 return f"Error: Not a file: {path}"
             content = file_path.read_text(encoding="utf-8")
             return content
         except PermissionError as e:
@@ -59,8 +63,9 @@ class ReadFileTool(Tool):
 class WriteFileTool(Tool):
     """Tool to write content to a file."""

-    def __init__(self, allowed_dir: Path | None = None):
+    def __init__(self, workspace: Path | None = None, allowed_dir: Path | None = None):
+        self._workspace = workspace
         self._allowed_dir = allowed_dir

     @property
@@ -90,10 +95,10 @@ class WriteFileTool(Tool):
     async def execute(self, path: str, content: str, **kwargs: Any) -> str:
         try:
-            file_path = _resolve_path(path, self._allowed_dir)
+            file_path = _resolve_path(path, self._workspace, self._allowed_dir)
             file_path.parent.mkdir(parents=True, exist_ok=True)
             file_path.write_text(content, encoding="utf-8")
-            return f"Successfully wrote {len(content)} bytes to {path}"
+            return f"Successfully wrote {len(content)} bytes to {file_path}"
         except PermissionError as e:
             return f"Error: {e}"
         except Exception as e:
@@ -102,8 +107,9 @@ class WriteFileTool(Tool):
 class EditFileTool(Tool):
     """Tool to edit a file by replacing text."""

-    def __init__(self, allowed_dir: Path | None = None):
+    def __init__(self, workspace: Path | None = None, allowed_dir: Path | None = None):
+        self._workspace = workspace
         self._allowed_dir = allowed_dir

     @property
@@ -137,24 +143,24 @@ class EditFileTool(Tool):
     async def execute(self, path: str, old_text: str, new_text: str, **kwargs: Any) -> str:
         try:
-            file_path = _resolve_path(path, self._allowed_dir)
+            file_path = _resolve_path(path, self._workspace, self._allowed_dir)
             if not file_path.exists():
                 return f"Error: File not found: {path}"

             content = file_path.read_text(encoding="utf-8")

             if old_text not in content:
                 return f"Error: old_text not found in file. Make sure it matches exactly."

             # Count occurrences
             count = content.count(old_text)
             if count > 1:
                 return f"Warning: old_text appears {count} times. Please provide more context to make it unique."

             new_content = content.replace(old_text, new_text, 1)
             file_path.write_text(new_content, encoding="utf-8")
-            return f"Successfully edited {path}"
+            return f"Successfully edited {file_path}"
         except PermissionError as e:
             return f"Error: {e}"
         except Exception as e:
@@ -163,8 +169,9 @@ class EditFileTool(Tool):
 class ListDirTool(Tool):
     """Tool to list directory contents."""

-    def __init__(self, allowed_dir: Path | None = None):
+    def __init__(self, workspace: Path | None = None, allowed_dir: Path | None = None):
+        self._workspace = workspace
         self._allowed_dir = allowed_dir

     @property
@@ -190,20 +197,20 @@ class ListDirTool(Tool):
     async def execute(self, path: str, **kwargs: Any) -> str:
         try:
-            dir_path = _resolve_path(path, self._allowed_dir)
+            dir_path = _resolve_path(path, self._workspace, self._allowed_dir)
             if not dir_path.exists():
                 return f"Error: Directory not found: {path}"
             if not dir_path.is_dir():
                 return f"Error: Not a directory: {path}"

             items = []
             for item in sorted(dir_path.iterdir()):
                 prefix = "📁 " if item.is_dir() else "📄 "
                 items.append(f"{prefix}{item.name}")

             if not items:
                 return f"Directory {path} is empty"
             return "\n".join(items)
         except PermissionError as e:
             return f"Error: {e}"

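Worth noting about `_resolve_path`: the containment check compares string prefixes, so `/tmp/allowed-2` would pass as "inside" `/tmp/allowed`. `pathlib` offers a component-aware alternative; a sketch with hypothetical paths (not a drop-in patch for the code above):

```python
from pathlib import Path


def is_inside(path: Path, allowed_dir: Path) -> bool:
    """True if path is allowed_dir itself or a descendant of it."""
    # is_relative_to (Python 3.9+) compares whole path components,
    # so /tmp/allowed-2 is NOT considered inside /tmp/allowed.
    return path.resolve().is_relative_to(allowed_dir.resolve())


print(is_inside(Path("/tmp/allowed/notes.txt"), Path("/tmp/allowed")))  # True
print(is_inside(Path("/tmp/allowed-2/x.txt"), Path("/tmp/allowed")))    # False
```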

@@ -63,7 +63,7 @@ async def connect_mcp_servers(
                     streamable_http_client(cfg.url)
                 )
             else:
-                logger.warning(f"MCP server '{name}': no command or url configured, skipping")
+                logger.warning("MCP server '{}': no command or url configured, skipping", name)
                 continue

             session = await stack.enter_async_context(ClientSession(read, write))
@@ -73,8 +73,8 @@ async def connect_mcp_servers(
             for tool_def in tools.tools:
                 wrapper = MCPToolWrapper(session, name, tool_def)
                 registry.register(wrapper)
-                logger.debug(f"MCP: registered tool '{wrapper.name}' from server '{name}'")
-            logger.info(f"MCP server '{name}': connected, {len(tools.tools)} tools registered")
+                logger.debug("MCP: registered tool '{}' from server '{}'", wrapper.name, name)
+            logger.info("MCP server '{}': connected, {} tools registered", name, len(tools.tools))
         except Exception as e:
-            logger.error(f"MCP server '{name}': failed to connect: {e}")
+            logger.error("MCP server '{}': failed to connect: {}", name, e)


@@ -52,6 +52,11 @@ class MessageTool(Tool):
                 "chat_id": {
                     "type": "string",
                     "description": "Optional: target chat/user ID"
+                },
+                "media": {
+                    "type": "array",
+                    "items": {"type": "string"},
+                    "description": "Optional: list of file paths to attach (images, audio, documents)"
                 }
             },
             "required": ["content"]
@@ -62,6 +67,7 @@ class MessageTool(Tool):
         content: str,
         channel: str | None = None,
         chat_id: str | None = None,
+        media: list[str] | None = None,
         **kwargs: Any
     ) -> str:
         channel = channel or self._default_channel
@@ -76,11 +82,13 @@ class MessageTool(Tool):
         msg = OutboundMessage(
             channel=channel,
             chat_id=chat_id,
-            content=content
+            content=content,
+            media=media or []
         )
         try:
             await self._send_callback(msg)
-            return f"Message sent to {channel}:{chat_id}"
+            media_info = f" with {len(media)} attachments" if media else ""
+            return f"Message sent to {channel}:{chat_id}{media_info}"
         except Exception as e:
             return f"Error sending message: {str(e)}"


@@ -26,7 +26,8 @@ class ExecTool(Tool):
         r"\brm\s+-[rf]{1,2}\b",        # rm -r, rm -rf, rm -fr
         r"\bdel\s+/[fq]\b",            # del /f, del /q
         r"\brmdir\s+/s\b",             # rmdir /s
-        r"\b(format|mkfs|diskpart)\b", # disk operations
+        r"(?:^|[;&|]\s*)format\b",     # format (as standalone command only)
+        r"\b(mkfs|diskpart)\b",        # disk operations
         r"\bdd\s+if=",                 # dd
         r">\s*/dev/sd",                # write to disk
         r"\b(shutdown|reboot|poweroff)\b", # system power
@@ -81,6 +82,12 @@ class ExecTool(Tool):
             )
         except asyncio.TimeoutError:
             process.kill()
+            # Wait for the process to fully terminate so pipes are
+            # drained and file descriptors are released.
+            try:
+                await asyncio.wait_for(process.wait(), timeout=5.0)
+            except asyncio.TimeoutError:
+                pass
             return f"Error: Command timed out after {self.timeout} seconds"

         output_parts = []

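The kill-then-wait addition above is the standard fix for leaking killed subprocesses: `kill()` alone sends the signal but never reaps the child, so its pipes and PID linger. The shape of the pattern in isolation, assuming a POSIX shell provides `sleep` and `echo`:

```python
import asyncio


async def run_with_timeout(cmd: str, timeout: float) -> str:
    process = await asyncio.create_subprocess_shell(
        cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    try:
        stdout, _ = await asyncio.wait_for(process.communicate(), timeout=timeout)
        return stdout.decode()
    except asyncio.TimeoutError:
        process.kill()
        # Reap the killed process so its pipes and PID are released;
        # the wait is bounded in case termination itself stalls.
        try:
            await asyncio.wait_for(process.wait(), timeout=5.0)
        except asyncio.TimeoutError:
            pass
        return f"Error: Command timed out after {timeout} seconds"


print(asyncio.run(run_with_timeout("sleep 10", 0.2)))  # Error: Command timed out after 0.2 seconds
```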

@@ -116,7 +116,7 @@ class WebFetchTool(Tool):
         # Validate URL before fetching
         is_valid, error_msg = _validate_url(url)
         if not is_valid:
-            return json.dumps({"error": f"URL validation failed: {error_msg}", "url": url})
+            return json.dumps({"error": f"URL validation failed: {error_msg}", "url": url}, ensure_ascii=False)

         try:
             async with httpx.AsyncClient(
@@ -131,7 +131,7 @@ class WebFetchTool(Tool):
             # JSON
             if "application/json" in ctype:
-                text, extractor = json.dumps(r.json(), indent=2), "json"
+                text, extractor = json.dumps(r.json(), indent=2, ensure_ascii=False), "json"
             # HTML
             elif "text/html" in ctype or r.text[:256].lower().startswith(("<!doctype", "<html")):
                 doc = Document(r.text)
@@ -146,9 +146,9 @@ class WebFetchTool(Tool):
                 text = text[:max_chars]
             return json.dumps({"url": url, "finalUrl": str(r.url), "status": r.status_code,
-                               "extractor": extractor, "truncated": truncated, "length": len(text), "text": text})
+                               "extractor": extractor, "truncated": truncated, "length": len(text), "text": text}, ensure_ascii=False)
         except Exception as e:
-            return json.dumps({"error": str(e), "url": url})
+            return json.dumps({"error": str(e), "url": url}, ensure_ascii=False)

     def _to_markdown(self, html: str) -> str:
         """Convert HTML to markdown."""


@@ -1,9 +1,6 @@
 """Async message queue for decoupled channel-agent communication."""

 import asyncio
-from typing import Callable, Awaitable
-
-from loguru import logger

 from nanobot.bus.events import InboundMessage, OutboundMessage
@@ -11,70 +8,36 @@ from nanobot.bus.events import InboundMessage, OutboundMessage
 class MessageBus:
     """
     Async message bus that decouples chat channels from the agent core.

     Channels push messages to the inbound queue, and the agent processes
     them and pushes responses to the outbound queue.
     """

     def __init__(self):
         self.inbound: asyncio.Queue[InboundMessage] = asyncio.Queue()
         self.outbound: asyncio.Queue[OutboundMessage] = asyncio.Queue()
-        self._outbound_subscribers: dict[str, list[Callable[[OutboundMessage], Awaitable[None]]]] = {}
-        self._running = False

     async def publish_inbound(self, msg: InboundMessage) -> None:
         """Publish a message from a channel to the agent."""
         await self.inbound.put(msg)

     async def consume_inbound(self) -> InboundMessage:
         """Consume the next inbound message (blocks until available)."""
         return await self.inbound.get()

     async def publish_outbound(self, msg: OutboundMessage) -> None:
         """Publish a response from the agent to channels."""
         await self.outbound.put(msg)

     async def consume_outbound(self) -> OutboundMessage:
         """Consume the next outbound message (blocks until available)."""
         return await self.outbound.get()

-    def subscribe_outbound(
-        self,
-        channel: str,
-        callback: Callable[[OutboundMessage], Awaitable[None]]
-    ) -> None:
-        """Subscribe to outbound messages for a specific channel."""
-        if channel not in self._outbound_subscribers:
-            self._outbound_subscribers[channel] = []
-        self._outbound_subscribers[channel].append(callback)
-
-    async def dispatch_outbound(self) -> None:
-        """
-        Dispatch outbound messages to subscribed channels.
-        Run this as a background task.
-        """
-        self._running = True
-        while self._running:
-            try:
-                msg = await asyncio.wait_for(self.outbound.get(), timeout=1.0)
-                subscribers = self._outbound_subscribers.get(msg.channel, [])
-                for callback in subscribers:
-                    try:
-                        await callback(msg)
-                    except Exception as e:
-                        logger.error(f"Error dispatching to {msg.channel}: {e}")
-            except asyncio.TimeoutError:
-                continue
-
-    def stop(self) -> None:
-        """Stop the dispatcher loop."""
-        self._running = False
-
     @property
     def inbound_size(self) -> int:
         """Number of pending inbound messages."""
         return self.inbound.qsize()

     @property
     def outbound_size(self) -> int:
         """Number of pending outbound messages."""

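After this simplification the bus is just two `asyncio.Queue`s with publish/consume wrappers; the subscriber and dispatcher machinery moves out of the bus entirely. A minimal sketch of how a channel and the agent loop now meet in the middle (with a simplified stand-in for nanobot's `InboundMessage`):

```python
import asyncio
from dataclasses import dataclass


@dataclass
class InboundMessage:  # stand-in for nanobot.bus.events.InboundMessage
    channel: str
    content: str


class MessageBus:
    def __init__(self):
        self.inbound: asyncio.Queue[InboundMessage] = asyncio.Queue()

    async def publish_inbound(self, msg: InboundMessage) -> None:
        await self.inbound.put(msg)

    async def consume_inbound(self) -> InboundMessage:
        return await self.inbound.get()


async def demo() -> str:
    bus = MessageBus()
    # A channel publishes; the agent loop consumes.
    await bus.publish_inbound(InboundMessage(channel="discord", content="hello"))
    msg = await bus.consume_inbound()
    return f"{msg.channel}: {msg.content}"


print(asyncio.run(demo()))  # discord: hello
```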

@@ -65,7 +65,7 @@ class NanobotDingTalkHandler(CallbackHandler):
             sender_id = chatbot_msg.sender_staff_id or chatbot_msg.sender_id
             sender_name = chatbot_msg.sender_nick or "Unknown"
-            logger.info(f"Received DingTalk message from {sender_name} ({sender_id}): {content}")
+            logger.info("Received DingTalk message from {} ({}): {}", sender_name, sender_id, content)

             # Forward to Nanobot via _on_message (non-blocking).
             # Store reference to prevent GC before task completes.
@@ -78,7 +78,7 @@ class NanobotDingTalkHandler(CallbackHandler):
             return AckMessage.STATUS_OK, "OK"
         except Exception as e:
-            logger.error(f"Error processing DingTalk message: {e}")
+            logger.error("Error processing DingTalk message: {}", e)
             # Return OK to avoid retry loop from DingTalk server
             return AckMessage.STATUS_OK, "Error"
@@ -142,13 +142,13 @@ class DingTalkChannel(BaseChannel):
                 try:
                     await self._client.start()
                 except Exception as e:
-                    logger.warning(f"DingTalk stream error: {e}")
+                    logger.warning("DingTalk stream error: {}", e)
                     if self._running:
                         logger.info("Reconnecting DingTalk stream in 5 seconds...")
                         await asyncio.sleep(5)
         except Exception as e:
-            logger.exception(f"Failed to start DingTalk channel: {e}")
+            logger.exception("Failed to start DingTalk channel: {}", e)

     async def stop(self) -> None:
         """Stop the DingTalk bot."""
@@ -186,7 +186,7 @@ class DingTalkChannel(BaseChannel):
             self._token_expiry = time.time() + int(res_data.get("expireIn", 7200)) - 60
             return self._access_token
         except Exception as e:
-            logger.error(f"Failed to get DingTalk access token: {e}")
+            logger.error("Failed to get DingTalk access token: {}", e)
             return None

     async def send(self, msg: OutboundMessage) -> None:
@@ -208,7 +208,7 @@ class DingTalkChannel(BaseChannel):
             "msgParam": json.dumps({
                 "text": msg.content,
                 "title": "Nanobot Reply",
-            }),
+            }, ensure_ascii=False),
         }

         if not self._http:
@@ -218,11 +218,11 @@ class DingTalkChannel(BaseChannel):
         try:
             resp = await self._http.post(url, json=data, headers=headers)
             if resp.status_code != 200:
-                logger.error(f"DingTalk send failed: {resp.text}")
+                logger.error("DingTalk send failed: {}", resp.text)
             else:
-                logger.debug(f"DingTalk message sent to {msg.chat_id}")
+                logger.debug("DingTalk message sent to {}", msg.chat_id)
         except Exception as e:
-            logger.error(f"Error sending DingTalk message: {e}")
+            logger.error("Error sending DingTalk message: {}", e)

     async def _on_message(self, content: str, sender_id: str, sender_name: str) -> None:
         """Handle incoming message (called by NanobotDingTalkHandler).
@@ -231,7 +231,7 @@ class DingTalkChannel(BaseChannel):
         permission checks before publishing to the bus.
         """
         try:
-            logger.info(f"DingTalk inbound: {content} from {sender_name}")
+            logger.info("DingTalk inbound: {} from {}", content, sender_name)
             await self._handle_message(
                 sender_id=sender_id,
                 chat_id=sender_id,  # For private chat, chat_id == sender_id
@@ -242,4 +242,4 @@ class DingTalkChannel(BaseChannel):
                 },
             )
         except Exception as e:
-            logger.error(f"Error publishing DingTalk message: {e}")
+            logger.error("Error publishing DingTalk message: {}", e)

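A note on the f-string → placeholder rewrites that run through this and the other channel files: with `logger.error(f"... {e}")` the message is rendered even when the level is filtered out, and braces inside the interpolated value can collide with loguru's own formatting. Loguru's positional `{}` style defers interpolation until the record is actually emitted. The stdlib `logging` analogue with `%s` placeholders, for illustration:

```python
import io
import logging

# Capture log output in a buffer so the behavior is observable.
buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
log = logging.getLogger("demo")
log.addHandler(handler)
log.setLevel(logging.INFO)

resp_text = 'payload with braces: {"code": 400}'

# Deferred interpolation: args are only formatted when the record is emitted.
# (loguru's equivalent is logger.error("send failed: {}", resp_text).)
log.error("send failed: %s", resp_text)
# Filtered out below the INFO threshold: the template is never rendered at all.
log.debug("expensive repr: %s", resp_text)

print(buf.getvalue().strip())  # ERROR send failed: payload with braces: {"code": 400}
```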

@@ -51,7 +51,7 @@ class DiscordChannel(BaseChannel):
             except asyncio.CancelledError:
                 break
             except Exception as e:
-                logger.warning(f"Discord gateway error: {e}")
+                logger.warning("Discord gateway error: {}", e)
                 if self._running:
                     logger.info("Reconnecting to Discord gateway in 5 seconds...")
                     await asyncio.sleep(5)
@@ -94,14 +94,14 @@ class DiscordChannel(BaseChannel):
                 if response.status_code == 429:
                     data = response.json()
                     retry_after = float(data.get("retry_after", 1.0))
-                    logger.warning(f"Discord rate limited, retrying in {retry_after}s")
+                    logger.warning("Discord rate limited, retrying in {}s", retry_after)
                     await asyncio.sleep(retry_after)
                     continue
                 response.raise_for_status()
                 return
             except Exception as e:
                 if attempt == 2:
-                    logger.error(f"Error sending Discord message: {e}")
+                    logger.error("Error sending Discord message: {}", e)
                 else:
                     await asyncio.sleep(1)
             finally:
@@ -116,7 +116,7 @@ class DiscordChannel(BaseChannel):
             try:
                 data = json.loads(raw)
             except json.JSONDecodeError:
-                logger.warning(f"Invalid JSON from Discord gateway: {raw[:100]}")
+                logger.warning("Invalid JSON from Discord gateway: {}", raw[:100])
                 continue

             op = data.get("op")
@@ -175,7 +175,7 @@ class DiscordChannel(BaseChannel):
             try:
                 await self._ws.send(json.dumps(payload))
             except Exception as e:
-                logger.warning(f"Discord heartbeat failed: {e}")
+                logger.warning("Discord heartbeat failed: {}", e)
                 break
             await asyncio.sleep(interval_s)
@@ -219,7 +219,7 @@ class DiscordChannel(BaseChannel):
                     media_paths.append(str(file_path))
                     content_parts.append(f"[attachment: {file_path}]")
                 except Exception as e:
-                    logger.warning(f"Failed to download Discord attachment: {e}")
+                    logger.warning("Failed to download Discord attachment: {}", e)
                     content_parts.append(f"[attachment: (unknown) - download failed]")

         reply_to = (payload.get("referenced_message") or {}).get("id")


@@ -94,7 +94,7 @@ class EmailChannel(BaseChannel):
                         metadata=item.get("metadata", {}),
                     )
             except Exception as e:
-                logger.error(f"Email polling error: {e}")
+                logger.error("Email polling error: {}", e)

             await asyncio.sleep(poll_seconds)
@@ -143,7 +143,7 @@ class EmailChannel(BaseChannel):
         try:
             await asyncio.to_thread(self._smtp_send, email_msg)
         except Exception as e:
-            logger.error(f"Error sending email to {to_addr}: {e}")
+            logger.error("Error sending email to {}: {}", to_addr, e)
             raise

     def _validate_config(self) -> bool:
@@ -162,7 +162,7 @@ class EmailChannel(BaseChannel):
             missing.append("smtp_password")

         if missing:
-            logger.error(f"Email channel not configured, missing: {', '.join(missing)}")
+            logger.error("Email channel not configured, missing: {}", ', '.join(missing))
             return False
         return True


@@ -2,6 +2,7 @@

 import asyncio
 import json
+import os
 import re
 import threading
 from collections import OrderedDict
@@ -17,6 +18,10 @@ from nanobot.config.schema import FeishuConfig
 try:
     import lark_oapi as lark
     from lark_oapi.api.im.v1 import (
+        CreateFileRequest,
+        CreateFileRequestBody,
+        CreateImageRequest,
+        CreateImageRequestBody,
         CreateMessageRequest,
         CreateMessageRequestBody,
         CreateMessageReactionRequest,
@@ -151,7 +156,7 @@ class FeishuChannel(BaseChannel):
             try:
                 self._ws_client.start()
             except Exception as e:
-                logger.warning(f"Feishu WebSocket error: {e}")
+                logger.warning("Feishu WebSocket error: {}", e)
                 if self._running:
                     import time; time.sleep(5)
@@ -172,7 +177,7 @@ class FeishuChannel(BaseChannel):
             try:
                 self._ws_client.stop()
             except Exception as e:
-                logger.warning(f"Error stopping WebSocket client: {e}")
+                logger.warning("Error stopping WebSocket client: {}", e)
         logger.info("Feishu bot stopped")

     def _add_reaction_sync(self, message_id: str, emoji_type: str) -> None:
@@ -189,11 +194,11 @@ class FeishuChannel(BaseChannel):
             response = self._client.im.v1.message_reaction.create(request)
             if not response.success():
-                logger.warning(f"Failed to add reaction: code={response.code}, msg={response.msg}")
+                logger.warning("Failed to add reaction: code={}, msg={}", response.code, response.msg)
             else:
-                logger.debug(f"Added {emoji_type} reaction to message {message_id}")
+                logger.debug("Added {} reaction to message {}", emoji_type, message_id)
         except Exception as e:
-            logger.warning(f"Error adding reaction: {e}")
+            logger.warning("Error adding reaction: {}", e)

     async def _add_reaction(self, message_id: str, emoji_type: str = "THUMBSUP") -> None:
         """
@@ -263,7 +268,6 @@ class FeishuChannel(BaseChannel):
             before = protected[last_end:m.start()].strip()
             if before:
                 elements.append({"tag": "markdown", "content": before})
-            level = len(m.group(1))
             text = m.group(2).strip()
             elements.append({
                 "tag": "div",
@@ -284,50 +288,128 @@ class FeishuChannel(BaseChannel):
         return elements or [{"tag": "markdown", "content": content}]

-    async def send(self, msg: OutboundMessage) -> None:
-        """Send a message through Feishu."""
-        if not self._client:
-            logger.warning("Feishu client not initialized")
-            return
+    _IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".gif", ".bmp", ".webp", ".ico", ".tiff", ".tif"}
+    _AUDIO_EXTS = {".opus"}
+    _FILE_TYPE_MAP = {
+        ".opus": "opus", ".mp4": "mp4", ".pdf": "pdf", ".doc": "doc", ".docx": "doc",
+        ".xls": "xls", ".xlsx": "xls", ".ppt": "ppt", ".pptx": "ppt",
+    }
+
+    def _upload_image_sync(self, file_path: str) -> str | None:
+        """Upload an image to Feishu and return the image_key."""
+        try:
+            with open(file_path, "rb") as f:
+                request = CreateImageRequest.builder() \
+                    .request_body(
+                        CreateImageRequestBody.builder()
+                        .image_type("message")
+                        .image(f)
+                        .build()
+                    ).build()
+                response = self._client.im.v1.image.create(request)
+                if response.success():
+                    image_key = response.data.image_key
+                    logger.debug("Uploaded image {}: {}", os.path.basename(file_path), image_key)
+                    return image_key
+                else:
+                    logger.error("Failed to upload image: code={}, msg={}", response.code, response.msg)
+                    return None
+        except Exception as e:
+            logger.error("Error uploading image {}: {}", file_path, e)
+            return None
+
+    def _upload_file_sync(self, file_path: str) -> str | None:
+        """Upload a file to Feishu and return the file_key."""
+        ext = os.path.splitext(file_path)[1].lower()
+        file_type = self._FILE_TYPE_MAP.get(ext, "stream")
+        file_name = os.path.basename(file_path)
+        try:
+            with open(file_path, "rb") as f:
+                request = CreateFileRequest.builder() \
+                    .request_body(
+                        CreateFileRequestBody.builder()
+                        .file_type(file_type)
+                        .file_name(file_name)
+                        .file(f)
+                        .build()
+                    ).build()
+                response = self._client.im.v1.file.create(request)
+                if response.success():
+                    file_key = response.data.file_key
+                    logger.debug("Uploaded file {}: {}", file_name, file_key)
+                    return file_key
+                else:
+                    logger.error("Failed to upload file: code={}, msg={}", response.code, response.msg)
+                    return None
+        except Exception as e:
+            logger.error("Error uploading file {}: {}", file_path, e)
+            return None
+
+    def _send_message_sync(self, receive_id_type: str, receive_id: str, msg_type: str, content: str) -> bool:
+        """Send a single message (text/image/file/interactive) synchronously."""
         try:
-            # Determine receive_id_type based on chat_id format
-            # open_id starts with "ou_", chat_id starts with "oc_"
-            if msg.chat_id.startswith("oc_"):
-                receive_id_type = "chat_id"
-            else:
-                receive_id_type = "open_id"
-
-            # Build card with markdown + table support
-            elements = self._build_card_elements(msg.content)
-            card = {
-                "config": {"wide_screen_mode": True},
-                "elements": elements,
-            }
-            content = json.dumps(card, ensure_ascii=False)
-
             request = CreateMessageRequest.builder() \
                 .receive_id_type(receive_id_type) \
                 .request_body(
                     CreateMessageRequestBody.builder()
-                    .receive_id(msg.chat_id)
-                    .msg_type("interactive")
+                    .receive_id(receive_id)
+                    .msg_type(msg_type)
                     .content(content)
                     .build()
                 ).build()
             response = self._client.im.v1.message.create(request)
             if not response.success():
                 logger.error(
-                    f"Failed to send Feishu message: code={response.code}, "
-                    f"msg={response.msg}, log_id={response.get_log_id()}"
+                    "Failed to send Feishu {} message: code={}, msg={}, log_id={}",
+                    msg_type, response.code, response.msg, response.get_log_id()
                 )
-            else:
-                logger.debug(f"Feishu message sent to {msg.chat_id}")
+                return False
+            logger.debug("Feishu {} message sent to {}", msg_type, receive_id)
+            return True
         except Exception as e:
-            logger.error(f"Error sending Feishu message: {e}")
+            logger.error("Error sending Feishu {} message: {}", msg_type, e)
return False
async def send(self, msg: OutboundMessage) -> None:
"""Send a message through Feishu, including media (images/files) if present."""
if not self._client:
logger.warning("Feishu client not initialized")
return
try:
receive_id_type = "chat_id" if msg.chat_id.startswith("oc_") else "open_id"
loop = asyncio.get_running_loop()
for file_path in msg.media:
if not os.path.isfile(file_path):
logger.warning("Media file not found: {}", file_path)
continue
ext = os.path.splitext(file_path)[1].lower()
if ext in self._IMAGE_EXTS:
key = await loop.run_in_executor(None, self._upload_image_sync, file_path)
if key:
await loop.run_in_executor(
None, self._send_message_sync,
receive_id_type, msg.chat_id, "image", json.dumps({"image_key": key}, ensure_ascii=False),
)
else:
key = await loop.run_in_executor(None, self._upload_file_sync, file_path)
if key:
media_type = "audio" if ext in self._AUDIO_EXTS else "file"
await loop.run_in_executor(
None, self._send_message_sync,
receive_id_type, msg.chat_id, media_type, json.dumps({"file_key": key}, ensure_ascii=False),
)
if msg.content and msg.content.strip():
card = {"config": {"wide_screen_mode": True}, "elements": self._build_card_elements(msg.content)}
await loop.run_in_executor(
None, self._send_message_sync,
receive_id_type, msg.chat_id, "interactive", json.dumps(card, ensure_ascii=False),
)
except Exception as e:
logger.error("Error sending Feishu message: {}", e)
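The extension-based routing in `send` above can be summarized as a tiny pure helper. This is a sketch only: the extension tables are copied from the class constants, and the names `classify_media` and the returned payload key are illustrative, not part of the Feishu SDK.

```python
import os

# Extension tables mirroring FeishuChannel's class constants (copied for illustration).
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".gif", ".bmp", ".webp", ".ico", ".tiff", ".tif"}
AUDIO_EXTS = {".opus"}

def classify_media(file_path: str) -> tuple[str, str]:
    """Return (msg_type, payload_key) the way send() routes an outbound file."""
    ext = os.path.splitext(file_path)[1].lower()
    if ext in IMAGE_EXTS:
        return "image", "image_key"  # uploaded via the image endpoint
    if ext in AUDIO_EXTS:
        return "audio", "file_key"   # uploaded as a file, sent as an audio message
    return "file", "file_key"        # generic file fallback

print(classify_media("chart.PNG"))  # → ('image', 'image_key')
```

Images go through the image upload endpoint and are sent with an `image_key` payload; everything else is uploaded as a file and sent with a `file_key`, with `.opus` flagged as audio.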
def _on_message_sync(self, data: "P2ImMessageReceiveV1") -> None: def _on_message_sync(self, data: "P2ImMessageReceiveV1") -> None:
""" """
@@ -399,4 +481,4 @@ class FeishuChannel(BaseChannel):
) )
except Exception as e: except Exception as e:
logger.error(f"Error processing Feishu message: {e}") logger.error("Error processing Feishu message: {}", e)
View File
@@ -45,7 +45,7 @@ class ChannelManager:
) )
logger.info("Telegram channel enabled") logger.info("Telegram channel enabled")
except ImportError as e: except ImportError as e:
logger.warning(f"Telegram channel not available: {e}") logger.warning("Telegram channel not available: {}", e)
# WhatsApp channel # WhatsApp channel
if self.config.channels.whatsapp.enabled: if self.config.channels.whatsapp.enabled:
@@ -56,7 +56,7 @@ class ChannelManager:
) )
logger.info("WhatsApp channel enabled") logger.info("WhatsApp channel enabled")
except ImportError as e: except ImportError as e:
logger.warning(f"WhatsApp channel not available: {e}") logger.warning("WhatsApp channel not available: {}", e)
# Discord channel # Discord channel
if self.config.channels.discord.enabled: if self.config.channels.discord.enabled:
@@ -67,7 +67,7 @@ class ChannelManager:
) )
logger.info("Discord channel enabled") logger.info("Discord channel enabled")
except ImportError as e: except ImportError as e:
logger.warning(f"Discord channel not available: {e}") logger.warning("Discord channel not available: {}", e)
# Feishu channel # Feishu channel
if self.config.channels.feishu.enabled: if self.config.channels.feishu.enabled:
@@ -78,7 +78,7 @@ class ChannelManager:
) )
logger.info("Feishu channel enabled") logger.info("Feishu channel enabled")
except ImportError as e: except ImportError as e:
logger.warning(f"Feishu channel not available: {e}") logger.warning("Feishu channel not available: {}", e)
# Mochat channel # Mochat channel
if self.config.channels.mochat.enabled: if self.config.channels.mochat.enabled:
@@ -90,7 +90,7 @@ class ChannelManager:
) )
logger.info("Mochat channel enabled") logger.info("Mochat channel enabled")
except ImportError as e: except ImportError as e:
logger.warning(f"Mochat channel not available: {e}") logger.warning("Mochat channel not available: {}", e)
# DingTalk channel # DingTalk channel
if self.config.channels.dingtalk.enabled: if self.config.channels.dingtalk.enabled:
@@ -101,7 +101,7 @@ class ChannelManager:
) )
logger.info("DingTalk channel enabled") logger.info("DingTalk channel enabled")
except ImportError as e: except ImportError as e:
logger.warning(f"DingTalk channel not available: {e}") logger.warning("DingTalk channel not available: {}", e)
# Email channel # Email channel
if self.config.channels.email.enabled: if self.config.channels.email.enabled:
@@ -112,7 +112,7 @@ class ChannelManager:
) )
logger.info("Email channel enabled") logger.info("Email channel enabled")
except ImportError as e: except ImportError as e:
logger.warning(f"Email channel not available: {e}") logger.warning("Email channel not available: {}", e)
# Slack channel # Slack channel
if self.config.channels.slack.enabled: if self.config.channels.slack.enabled:
@@ -123,7 +123,7 @@ class ChannelManager:
) )
logger.info("Slack channel enabled") logger.info("Slack channel enabled")
except ImportError as e: except ImportError as e:
logger.warning(f"Slack channel not available: {e}") logger.warning("Slack channel not available: {}", e)
# QQ channel # QQ channel
if self.config.channels.qq.enabled: if self.config.channels.qq.enabled:
@@ -135,14 +135,14 @@ class ChannelManager:
) )
logger.info("QQ channel enabled") logger.info("QQ channel enabled")
except ImportError as e: except ImportError as e:
logger.warning(f"QQ channel not available: {e}") logger.warning("QQ channel not available: {}", e)
async def _start_channel(self, name: str, channel: BaseChannel) -> None: async def _start_channel(self, name: str, channel: BaseChannel) -> None:
"""Start a channel and log any exceptions.""" """Start a channel and log any exceptions."""
try: try:
await channel.start() await channel.start()
except Exception as e: except Exception as e:
logger.error(f"Failed to start channel {name}: {e}") logger.error("Failed to start channel {}: {}", name, e)
async def start_all(self) -> None: async def start_all(self) -> None:
"""Start all channels and the outbound dispatcher.""" """Start all channels and the outbound dispatcher."""
@@ -156,7 +156,7 @@ class ChannelManager:
# Start channels # Start channels
tasks = [] tasks = []
for name, channel in self.channels.items(): for name, channel in self.channels.items():
logger.info(f"Starting {name} channel...") logger.info("Starting {} channel...", name)
tasks.append(asyncio.create_task(self._start_channel(name, channel))) tasks.append(asyncio.create_task(self._start_channel(name, channel)))
# Wait for all to complete (they should run forever) # Wait for all to complete (they should run forever)
@@ -178,9 +178,9 @@ class ChannelManager:
for name, channel in self.channels.items(): for name, channel in self.channels.items():
try: try:
await channel.stop() await channel.stop()
logger.info(f"Stopped {name} channel") logger.info("Stopped {} channel", name)
except Exception as e: except Exception as e:
logger.error(f"Error stopping {name}: {e}") logger.error("Error stopping {}: {}", name, e)
async def _dispatch_outbound(self) -> None: async def _dispatch_outbound(self) -> None:
"""Dispatch outbound messages to the appropriate channel.""" """Dispatch outbound messages to the appropriate channel."""
@@ -198,9 +198,9 @@ class ChannelManager:
try: try:
await channel.send(msg) await channel.send(msg)
except Exception as e: except Exception as e:
logger.error(f"Error sending to {msg.channel}: {e}") logger.error("Error sending to {}: {}", msg.channel, e)
else: else:
logger.warning(f"Unknown channel: {msg.channel}") logger.warning("Unknown channel: {}", msg.channel)
except asyncio.TimeoutError: except asyncio.TimeoutError:
continue continue
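The dispatcher loop excerpted above polls the queue with a timeout so a stopped manager exits promptly instead of blocking forever on `get()`. A minimal self-contained sketch of that pattern, with plain dicts and coroutines standing in for `OutboundMessage` and channel objects:

```python
import asyncio

async def dispatch_outbound(queue: asyncio.Queue, channels: dict, running: asyncio.Event) -> None:
    """Drain the outbound queue, routing each message to its channel's send coroutine."""
    while running.is_set():
        try:
            msg = await asyncio.wait_for(queue.get(), timeout=0.1)
        except asyncio.TimeoutError:
            continue  # nothing queued; loop back and re-check the running flag
        send = channels.get(msg["channel"])
        if send is None:
            print(f"Unknown channel: {msg['channel']}")
            continue
        await send(msg)

async def demo() -> list[str]:
    sent: list[str] = []
    queue: asyncio.Queue = asyncio.Queue()
    running = asyncio.Event()
    running.set()

    async def fake_send(msg: dict) -> None:
        sent.append(msg["text"])

    await queue.put({"channel": "telegram", "text": "hello"})
    task = asyncio.create_task(dispatch_outbound(queue, {"telegram": fake_send}, running))
    await asyncio.sleep(0.3)  # let the dispatcher deliver, then signal shutdown
    running.clear()
    await task
    return sent

result = asyncio.run(demo())
print(result)  # → ['hello']
```

The short timeout is the shutdown mechanism: clearing the event is noticed within one `wait_for` interval.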
View File
@@ -322,7 +322,7 @@ class MochatChannel(BaseChannel):
await self._api_send("/api/claw/sessions/send", "sessionId", target.id, await self._api_send("/api/claw/sessions/send", "sessionId", target.id,
content, msg.reply_to) content, msg.reply_to)
except Exception as e: except Exception as e:
logger.error(f"Failed to send Mochat message: {e}") logger.error("Failed to send Mochat message: {}", e)
# ---- config / init helpers --------------------------------------------- # ---- config / init helpers ---------------------------------------------
@@ -380,7 +380,7 @@ class MochatChannel(BaseChannel):
@client.event @client.event
async def connect_error(data: Any) -> None: async def connect_error(data: Any) -> None:
logger.error(f"Mochat websocket connect error: {data}") logger.error("Mochat websocket connect error: {}", data)
@client.on("claw.session.events") @client.on("claw.session.events")
async def on_session_events(payload: dict[str, Any]) -> None: async def on_session_events(payload: dict[str, Any]) -> None:
@@ -407,7 +407,7 @@ class MochatChannel(BaseChannel):
) )
return True return True
except Exception as e: except Exception as e:
logger.error(f"Failed to connect Mochat websocket: {e}") logger.error("Failed to connect Mochat websocket: {}", e)
try: try:
await client.disconnect() await client.disconnect()
except Exception: except Exception:
@@ -444,7 +444,7 @@ class MochatChannel(BaseChannel):
"limit": self.config.watch_limit, "limit": self.config.watch_limit,
}) })
if not ack.get("result"): if not ack.get("result"):
logger.error(f"Mochat subscribeSessions failed: {ack.get('message', 'unknown error')}") logger.error("Mochat subscribeSessions failed: {}", ack.get('message', 'unknown error'))
return False return False
data = ack.get("data") data = ack.get("data")
@@ -466,7 +466,7 @@ class MochatChannel(BaseChannel):
return True return True
ack = await self._socket_call("com.claw.im.subscribePanels", {"panelIds": panel_ids}) ack = await self._socket_call("com.claw.im.subscribePanels", {"panelIds": panel_ids})
if not ack.get("result"): if not ack.get("result"):
logger.error(f"Mochat subscribePanels failed: {ack.get('message', 'unknown error')}") logger.error("Mochat subscribePanels failed: {}", ack.get('message', 'unknown error'))
return False return False
return True return True
@@ -488,7 +488,7 @@ class MochatChannel(BaseChannel):
try: try:
await self._refresh_targets(subscribe_new=self._ws_ready) await self._refresh_targets(subscribe_new=self._ws_ready)
except Exception as e: except Exception as e:
logger.warning(f"Mochat refresh failed: {e}") logger.warning("Mochat refresh failed: {}", e)
if self._fallback_mode: if self._fallback_mode:
await self._ensure_fallback_workers() await self._ensure_fallback_workers()
@@ -502,7 +502,7 @@ class MochatChannel(BaseChannel):
try: try:
response = await self._post_json("/api/claw/sessions/list", {}) response = await self._post_json("/api/claw/sessions/list", {})
except Exception as e: except Exception as e:
logger.warning(f"Mochat listSessions failed: {e}") logger.warning("Mochat listSessions failed: {}", e)
return return
sessions = response.get("sessions") sessions = response.get("sessions")
@@ -536,7 +536,7 @@ class MochatChannel(BaseChannel):
try: try:
response = await self._post_json("/api/claw/groups/get", {}) response = await self._post_json("/api/claw/groups/get", {})
except Exception as e: except Exception as e:
logger.warning(f"Mochat getWorkspaceGroup failed: {e}") logger.warning("Mochat getWorkspaceGroup failed: {}", e)
return return
raw_panels = response.get("panels") raw_panels = response.get("panels")
@@ -598,7 +598,7 @@ class MochatChannel(BaseChannel):
except asyncio.CancelledError: except asyncio.CancelledError:
break break
except Exception as e: except Exception as e:
logger.warning(f"Mochat watch fallback error ({session_id}): {e}") logger.warning("Mochat watch fallback error ({}): {}", session_id, e)
await asyncio.sleep(max(0.1, self.config.retry_delay_ms / 1000.0)) await asyncio.sleep(max(0.1, self.config.retry_delay_ms / 1000.0))
async def _panel_poll_worker(self, panel_id: str) -> None: async def _panel_poll_worker(self, panel_id: str) -> None:
@@ -625,7 +625,7 @@ class MochatChannel(BaseChannel):
except asyncio.CancelledError: except asyncio.CancelledError:
break break
except Exception as e: except Exception as e:
logger.warning(f"Mochat panel polling error ({panel_id}): {e}") logger.warning("Mochat panel polling error ({}): {}", panel_id, e)
await asyncio.sleep(sleep_s) await asyncio.sleep(sleep_s)
# ---- inbound event processing ------------------------------------------ # ---- inbound event processing ------------------------------------------
@@ -836,7 +836,7 @@ class MochatChannel(BaseChannel):
try: try:
data = json.loads(self._cursor_path.read_text("utf-8")) data = json.loads(self._cursor_path.read_text("utf-8"))
except Exception as e: except Exception as e:
logger.warning(f"Failed to read Mochat cursor file: {e}") logger.warning("Failed to read Mochat cursor file: {}", e)
return return
cursors = data.get("cursors") if isinstance(data, dict) else None cursors = data.get("cursors") if isinstance(data, dict) else None
if isinstance(cursors, dict): if isinstance(cursors, dict):
@@ -852,7 +852,7 @@ class MochatChannel(BaseChannel):
"cursors": self._session_cursor, "cursors": self._session_cursor,
}, ensure_ascii=False, indent=2) + "\n", "utf-8") }, ensure_ascii=False, indent=2) + "\n", "utf-8")
except Exception as e: except Exception as e:
logger.warning(f"Failed to save Mochat cursor file: {e}") logger.warning("Failed to save Mochat cursor file: {}", e)
# ---- HTTP helpers ------------------------------------------------------ # ---- HTTP helpers ------------------------------------------------------
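The cursor persistence shown above is a small best-effort read/write round-trip. A standalone sketch, assuming the same `{"cursors": {...}}` file schema visible in the diff; the helper names are mine:

```python
import json
import tempfile
from pathlib import Path

def load_cursors(path: Path) -> dict:
    """Best-effort read: a missing or malformed cursor file yields an empty map."""
    try:
        data = json.loads(path.read_text("utf-8"))
    except Exception:
        return {}
    cursors = data.get("cursors") if isinstance(data, dict) else None
    return dict(cursors) if isinstance(cursors, dict) else {}

def save_cursors(path: Path, cursors: dict) -> None:
    """Persist cursors in the {"cursors": {...}} schema used above."""
    path.write_text(json.dumps({"cursors": cursors}, ensure_ascii=False, indent=2) + "\n", "utf-8")

path = Path(tempfile.mkdtemp()) / "mochat_cursor.json"
save_cursors(path, {"session-1": "msg-42"})
restored = load_cursors(path)
print(restored)  # → {'session-1': 'msg-42'}
```

Swallowing read errors matches the channel's behavior: a corrupt cursor file degrades to re-reading history rather than crashing startup.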
View File
@@ -34,7 +34,7 @@ def _make_bot_class(channel: "QQChannel") -> "type[botpy.Client]":
super().__init__(intents=intents) super().__init__(intents=intents)
async def on_ready(self): async def on_ready(self):
logger.info(f"QQ bot ready: {self.robot.name}") logger.info("QQ bot ready: {}", self.robot.name)
async def on_c2c_message_create(self, message: "C2CMessage"): async def on_c2c_message_create(self, message: "C2CMessage"):
await channel._on_message(message) await channel._on_message(message)
@@ -80,7 +80,7 @@ class QQChannel(BaseChannel):
try: try:
await self._client.start(appid=self.config.app_id, secret=self.config.secret) await self._client.start(appid=self.config.app_id, secret=self.config.secret)
except Exception as e: except Exception as e:
logger.warning(f"QQ bot error: {e}") logger.warning("QQ bot error: {}", e)
if self._running: if self._running:
logger.info("Reconnecting QQ bot in 5 seconds...") logger.info("Reconnecting QQ bot in 5 seconds...")
await asyncio.sleep(5) await asyncio.sleep(5)
@@ -108,7 +108,7 @@ class QQChannel(BaseChannel):
content=msg.content, content=msg.content,
) )
except Exception as e: except Exception as e:
logger.error(f"Error sending QQ message: {e}") logger.error("Error sending QQ message: {}", e)
async def _on_message(self, data: "C2CMessage") -> None: async def _on_message(self, data: "C2CMessage") -> None:
"""Handle incoming message from QQ.""" """Handle incoming message from QQ."""
@@ -131,4 +131,4 @@ class QQChannel(BaseChannel):
metadata={"message_id": data.id}, metadata={"message_id": data.id},
) )
except Exception as e: except Exception as e:
logger.error(f"Error handling QQ message: {e}") logger.error("Error handling QQ message: {}", e)
View File
@@ -10,6 +10,8 @@ from slack_sdk.socket_mode.request import SocketModeRequest
from slack_sdk.socket_mode.response import SocketModeResponse from slack_sdk.socket_mode.response import SocketModeResponse
from slack_sdk.web.async_client import AsyncWebClient from slack_sdk.web.async_client import AsyncWebClient
from slackify_markdown import slackify_markdown
from nanobot.bus.events import OutboundMessage from nanobot.bus.events import OutboundMessage
from nanobot.bus.queue import MessageBus from nanobot.bus.queue import MessageBus
from nanobot.channels.base import BaseChannel from nanobot.channels.base import BaseChannel
@@ -34,7 +36,7 @@ class SlackChannel(BaseChannel):
logger.error("Slack bot/app token not configured") logger.error("Slack bot/app token not configured")
return return
if self.config.mode != "socket": if self.config.mode != "socket":
logger.error(f"Unsupported Slack mode: {self.config.mode}") logger.error("Unsupported Slack mode: {}", self.config.mode)
return return
self._running = True self._running = True
@@ -51,9 +53,9 @@ class SlackChannel(BaseChannel):
try: try:
auth = await self._web_client.auth_test() auth = await self._web_client.auth_test()
self._bot_user_id = auth.get("user_id") self._bot_user_id = auth.get("user_id")
logger.info(f"Slack bot connected as {self._bot_user_id}") logger.info("Slack bot connected as {}", self._bot_user_id)
except Exception as e: except Exception as e:
logger.warning(f"Slack auth_test failed: {e}") logger.warning("Slack auth_test failed: {}", e)
logger.info("Starting Slack Socket Mode client...") logger.info("Starting Slack Socket Mode client...")
await self._socket_client.connect() await self._socket_client.connect()
@@ -68,7 +70,7 @@ class SlackChannel(BaseChannel):
try: try:
await self._socket_client.close() await self._socket_client.close()
except Exception as e: except Exception as e:
logger.warning(f"Slack socket close failed: {e}") logger.warning("Slack socket close failed: {}", e)
self._socket_client = None self._socket_client = None
async def send(self, msg: OutboundMessage) -> None: async def send(self, msg: OutboundMessage) -> None:
@@ -84,11 +86,11 @@ class SlackChannel(BaseChannel):
use_thread = thread_ts and channel_type != "im" use_thread = thread_ts and channel_type != "im"
await self._web_client.chat_postMessage( await self._web_client.chat_postMessage(
channel=msg.chat_id, channel=msg.chat_id,
text=msg.content or "", text=self._to_mrkdwn(msg.content),
thread_ts=thread_ts if use_thread else None, thread_ts=thread_ts if use_thread else None,
) )
except Exception as e: except Exception as e:
logger.error(f"Error sending Slack message: {e}") logger.error("Error sending Slack message: {}", e)
async def _on_socket_request( async def _on_socket_request(
self, self,
@@ -150,17 +152,19 @@ class SlackChannel(BaseChannel):
text = self._strip_bot_mention(text) text = self._strip_bot_mention(text)
thread_ts = event.get("thread_ts") or event.get("ts") thread_ts = event.get("thread_ts")
if self.config.reply_in_thread and not thread_ts:
thread_ts = event.get("ts")
# Add :eyes: reaction to the triggering message (best-effort) # Add :eyes: reaction to the triggering message (best-effort)
try: try:
if self._web_client and event.get("ts"): if self._web_client and event.get("ts"):
await self._web_client.reactions_add( await self._web_client.reactions_add(
channel=chat_id, channel=chat_id,
name="eyes", name=self.config.react_emoji,
timestamp=event.get("ts"), timestamp=event.get("ts"),
) )
except Exception as e: except Exception as e:
logger.debug(f"Slack reactions_add failed: {e}") logger.debug("Slack reactions_add failed: {}", e)
await self._handle_message( await self._handle_message(
sender_id=sender_id, sender_id=sender_id,
@@ -203,3 +207,31 @@ class SlackChannel(BaseChannel):
if not text or not self._bot_user_id: if not text or not self._bot_user_id:
return text return text
return re.sub(rf"<@{re.escape(self._bot_user_id)}>\s*", "", text).strip() return re.sub(rf"<@{re.escape(self._bot_user_id)}>\s*", "", text).strip()
_TABLE_RE = re.compile(r"(?m)^\|.*\|$(?:\n\|[\s:|-]*\|$)(?:\n\|.*\|$)*")
@classmethod
def _to_mrkdwn(cls, text: str) -> str:
"""Convert Markdown to Slack mrkdwn, including tables."""
if not text:
return ""
text = cls._TABLE_RE.sub(cls._convert_table, text)
return slackify_markdown(text)
@staticmethod
def _convert_table(match: re.Match) -> str:
"""Convert a Markdown table to a Slack-readable list."""
lines = [ln.strip() for ln in match.group(0).strip().splitlines() if ln.strip()]
if len(lines) < 2:
return match.group(0)
headers = [h.strip() for h in lines[0].strip("|").split("|")]
start = 2 if re.fullmatch(r"[|\s:\-]+", lines[1]) else 1
rows: list[str] = []
for line in lines[start:]:
cells = [c.strip() for c in line.strip("|").split("|")]
cells = (cells + [""] * len(headers))[: len(headers)]
parts = [f"**{headers[i]}**: {cells[i]}" for i in range(len(headers)) if cells[i]]
if parts:
rows.append(" · ".join(parts))
return "\n".join(rows)
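The table conversion above can be exercised on its own. This sketch reuses the same regex and row logic but leaves out the final `slackify_markdown` pass (which would turn `**bold**` into Slack's `*bold*`):

```python
import re

TABLE_RE = re.compile(r"(?m)^\|.*\|$(?:\n\|[\s:|-]*\|$)(?:\n\|.*\|$)*")

def table_to_list(md: str) -> str:
    """Replace each Markdown table with one '**Header**: cell · ...' line per row."""
    def convert(match: re.Match) -> str:
        lines = [ln.strip() for ln in match.group(0).strip().splitlines() if ln.strip()]
        headers = [h.strip() for h in lines[0].strip("|").split("|")]
        rows = []
        for line in lines[2:]:  # skip header and separator rows
            cells = [c.strip() for c in line.strip("|").split("|")]
            cells = (cells + [""] * len(headers))[: len(headers)]  # pad short rows
            parts = [f"**{headers[i]}**: {cells[i]}" for i in range(len(headers)) if cells[i]]
            if parts:
                rows.append(" · ".join(parts))
        return "\n".join(rows)
    return TABLE_RE.sub(convert, md)

md = "| Name | Role |\n|------|------|\n| Ada | Engineer |"
print(table_to_list(md))  # → **Name**: Ada · **Role**: Engineer
```

Flattening each row into a labeled list is a pragmatic choice because Slack mrkdwn has no native table syntax.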
View File
@@ -78,6 +78,26 @@ def _markdown_to_telegram_html(text: str) -> str:
return text return text
def _split_message(content: str, max_len: int = 4000) -> list[str]:
"""Split content into chunks within max_len, preferring line breaks."""
if len(content) <= max_len:
return [content]
chunks: list[str] = []
while content:
if len(content) <= max_len:
chunks.append(content)
break
cut = content[:max_len]
pos = cut.rfind("\n")
if pos <= 0:  # no break found, or it sits at index 0 and would yield an empty chunk
pos = cut.rfind(" ")
if pos <= 0:
pos = max_len
chunks.append(content[:pos])
content = content[pos:].lstrip()
return chunks
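A standalone version of the chunking strategy above, with an extra `pos <= 0` guard against emitting an empty chunk when the only break character sits at index 0 (the guard is my hardening, not in the original):

```python
def split_message(content: str, max_len: int = 4000) -> list[str]:
    """Greedily cut at the last newline (else space) before max_len."""
    chunks: list[str] = []
    while len(content) > max_len:
        cut = content[:max_len]
        pos = cut.rfind("\n")
        if pos <= 0:  # no usable newline; fall back to a space
            pos = cut.rfind(" ")
        if pos <= 0:  # no usable break at all; hard cut
            pos = max_len
        chunks.append(content[:pos])
        content = content[pos:].lstrip()
    chunks.append(content)
    return chunks

print(split_message("a" * 10 + "\n" + "b" * 10, max_len=15))  # → ['aaaaaaaaaa', 'bbbbbbbbbb']
```

Preferring line breaks keeps Markdown structure (lists, fences) intact across the 4096-character Telegram message limit.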
class TelegramChannel(BaseChannel): class TelegramChannel(BaseChannel):
""" """
Telegram channel using long polling. Telegram channel using long polling.
@@ -145,13 +165,13 @@ class TelegramChannel(BaseChannel):
# Get bot info and register command menu # Get bot info and register command menu
bot_info = await self._app.bot.get_me() bot_info = await self._app.bot.get_me()
logger.info(f"Telegram bot @{bot_info.username} connected") logger.info("Telegram bot @{} connected", bot_info.username)
try: try:
await self._app.bot.set_my_commands(self.BOT_COMMANDS) await self._app.bot.set_my_commands(self.BOT_COMMANDS)
logger.debug("Telegram bot commands registered") logger.debug("Telegram bot commands registered")
except Exception as e: except Exception as e:
logger.warning(f"Failed to register bot commands: {e}") logger.warning("Failed to register bot commands: {}", e)
# Start polling (this runs until stopped) # Start polling (this runs until stopped)
await self._app.updater.start_polling( await self._app.updater.start_polling(
@@ -178,37 +198,61 @@ class TelegramChannel(BaseChannel):
await self._app.shutdown() await self._app.shutdown()
self._app = None self._app = None
@staticmethod
def _get_media_type(path: str) -> str:
"""Guess media type from file extension."""
ext = path.rsplit(".", 1)[-1].lower() if "." in path else ""
if ext in ("jpg", "jpeg", "png", "gif", "webp"):
return "photo"
if ext == "ogg":
return "voice"
if ext in ("mp3", "m4a", "wav", "aac"):
return "audio"
return "document"
async def send(self, msg: OutboundMessage) -> None: async def send(self, msg: OutboundMessage) -> None:
"""Send a message through Telegram.""" """Send a message through Telegram."""
if not self._app: if not self._app:
logger.warning("Telegram bot not running") logger.warning("Telegram bot not running")
return return
# Stop typing indicator for this chat
self._stop_typing(msg.chat_id) self._stop_typing(msg.chat_id)
try: try:
# chat_id should be the Telegram chat ID (integer)
chat_id = int(msg.chat_id) chat_id = int(msg.chat_id)
# Convert markdown to Telegram HTML
html_content = _markdown_to_telegram_html(msg.content)
await self._app.bot.send_message(
chat_id=chat_id,
text=html_content,
parse_mode="HTML"
)
except ValueError: except ValueError:
logger.error(f"Invalid chat_id: {msg.chat_id}") logger.error("Invalid chat_id: {}", msg.chat_id)
except Exception as e: return
# Fallback to plain text if HTML parsing fails
logger.warning(f"HTML parse failed, falling back to plain text: {e}") # Send media files
for media_path in (msg.media or []):
try: try:
await self._app.bot.send_message( media_type = self._get_media_type(media_path)
chat_id=int(msg.chat_id), sender = {
text=msg.content "photo": self._app.bot.send_photo,
) "voice": self._app.bot.send_voice,
except Exception as e2: "audio": self._app.bot.send_audio,
logger.error(f"Error sending Telegram message: {e2}") }.get(media_type, self._app.bot.send_document)
param = media_type  # _get_media_type only returns "photo" / "voice" / "audio" / "document", matching the send_* kwargs
with open(media_path, 'rb') as f:
await sender(chat_id=chat_id, **{param: f})
except Exception as e:
filename = media_path.rsplit("/", 1)[-1]
logger.error("Failed to send media {}: {}", media_path, e)
await self._app.bot.send_message(chat_id=chat_id, text=f"[Failed to send: {filename}]")
# Send text content
if msg.content and msg.content != "[empty message]":
for chunk in _split_message(msg.content):
try:
html = _markdown_to_telegram_html(chunk)
await self._app.bot.send_message(chat_id=chat_id, text=html, parse_mode="HTML")
except Exception as e:
logger.warning("HTML parse failed, falling back to plain text: {}", e)
try:
await self._app.bot.send_message(chat_id=chat_id, text=chunk)
except Exception as e2:
logger.error("Error sending Telegram message: {}", e2)
async def _on_start(self, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None: async def _on_start(self, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
"""Handle /start command.""" """Handle /start command."""
@@ -222,12 +266,18 @@ class TelegramChannel(BaseChannel):
"Type /help to see available commands." "Type /help to see available commands."
) )
@staticmethod
def _sender_id(user) -> str:
"""Build sender_id with username for allowlist matching."""
sid = str(user.id)
return f"{sid}|{user.username}" if user.username else sid
async def _forward_command(self, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None: async def _forward_command(self, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
"""Forward slash commands to the bus for unified handling in AgentLoop.""" """Forward slash commands to the bus for unified handling in AgentLoop."""
if not update.message or not update.effective_user: if not update.message or not update.effective_user:
return return
await self._handle_message( await self._handle_message(
sender_id=str(update.effective_user.id), sender_id=self._sender_id(update.effective_user),
chat_id=str(update.message.chat_id), chat_id=str(update.message.chat_id),
content=update.message.text, content=update.message.text,
) )
@@ -240,11 +290,7 @@ class TelegramChannel(BaseChannel):
message = update.message message = update.message
user = update.effective_user user = update.effective_user
chat_id = message.chat_id chat_id = message.chat_id
sender_id = self._sender_id(user)
# Use stable numeric ID, but keep username for allowlist compatibility
sender_id = str(user.id)
if user.username:
sender_id = f"{sender_id}|{user.username}"
# Store chat_id for replies # Store chat_id for replies
self._chat_ids[sender_id] = chat_id self._chat_ids[sender_id] = chat_id
@@ -298,21 +344,21 @@ class TelegramChannel(BaseChannel):
transcriber = GroqTranscriptionProvider(api_key=self.groq_api_key) transcriber = GroqTranscriptionProvider(api_key=self.groq_api_key)
transcription = await transcriber.transcribe(file_path) transcription = await transcriber.transcribe(file_path)
if transcription: if transcription:
logger.info(f"Transcribed {media_type}: {transcription[:50]}...") logger.info("Transcribed {}: {}...", media_type, transcription[:50])
content_parts.append(f"[transcription: {transcription}]") content_parts.append(f"[transcription: {transcription}]")
else: else:
content_parts.append(f"[{media_type}: {file_path}]") content_parts.append(f"[{media_type}: {file_path}]")
else: else:
content_parts.append(f"[{media_type}: {file_path}]") content_parts.append(f"[{media_type}: {file_path}]")
logger.debug(f"Downloaded {media_type} to {file_path}") logger.debug("Downloaded {} to {}", media_type, file_path)
except Exception as e: except Exception as e:
logger.error(f"Failed to download media: {e}") logger.error("Failed to download media: {}", e)
content_parts.append(f"[{media_type}: download failed]") content_parts.append(f"[{media_type}: download failed]")
content = "\n".join(content_parts) if content_parts else "[empty message]" content = "\n".join(content_parts) if content_parts else "[empty message]"
logger.debug(f"Telegram message from {sender_id}: {content[:50]}...") logger.debug("Telegram message from {}: {}...", sender_id, content[:50])
str_chat_id = str(chat_id) str_chat_id = str(chat_id)
@@ -355,11 +401,11 @@ class TelegramChannel(BaseChannel):
except asyncio.CancelledError: except asyncio.CancelledError:
pass pass
except Exception as e: except Exception as e:
logger.debug(f"Typing indicator stopped for {chat_id}: {e}") logger.debug("Typing indicator stopped for {}: {}", chat_id, e)
async def _on_error(self, update: object, context: ContextTypes.DEFAULT_TYPE) -> None: async def _on_error(self, update: object, context: ContextTypes.DEFAULT_TYPE) -> None:
"""Log polling / handler errors instead of silently swallowing them.""" """Log polling / handler errors instead of silently swallowing them."""
logger.error(f"Telegram error: {context.error}") logger.error("Telegram error: {}", context.error)
def _get_extension(self, media_type: str, mime_type: str | None) -> str: def _get_extension(self, media_type: str, mime_type: str | None) -> str:
"""Get file extension based on media type.""" """Get file extension based on media type."""
View File
@@ -34,7 +34,7 @@ class WhatsAppChannel(BaseChannel):
         bridge_url = self.config.bridge_url
-        logger.info(f"Connecting to WhatsApp bridge at {bridge_url}...")
+        logger.info("Connecting to WhatsApp bridge at {}...", bridge_url)
         self._running = True
@@ -53,14 +53,14 @@ class WhatsAppChannel(BaseChannel):
                 try:
                     await self._handle_bridge_message(message)
                 except Exception as e:
-                    logger.error(f"Error handling bridge message: {e}")
+                    logger.error("Error handling bridge message: {}", e)
             except asyncio.CancelledError:
                 break
             except Exception as e:
                 self._connected = False
                 self._ws = None
-                logger.warning(f"WhatsApp bridge connection error: {e}")
+                logger.warning("WhatsApp bridge connection error: {}", e)
                 if self._running:
                     logger.info("Reconnecting in 5 seconds...")
@@ -87,16 +87,16 @@ class WhatsAppChannel(BaseChannel):
                 "to": msg.chat_id,
                 "text": msg.content
             }
-            await self._ws.send(json.dumps(payload))
+            await self._ws.send(json.dumps(payload, ensure_ascii=False))
         except Exception as e:
-            logger.error(f"Error sending WhatsApp message: {e}")
+            logger.error("Error sending WhatsApp message: {}", e)

     async def _handle_bridge_message(self, raw: str) -> None:
         """Handle a message from the bridge."""
         try:
             data = json.loads(raw)
         except json.JSONDecodeError:
-            logger.warning(f"Invalid JSON from bridge: {raw[:100]}")
+            logger.warning("Invalid JSON from bridge: {}", raw[:100])
             return

         msg_type = data.get("type")
@@ -112,11 +112,11 @@ class WhatsAppChannel(BaseChannel):
             # Extract just the phone number or lid as chat_id
             user_id = pn if pn else sender
             sender_id = user_id.split("@")[0] if "@" in user_id else user_id
-            logger.info(f"Sender {sender}")
+            logger.info("Sender {}", sender)

             # Handle voice transcription if it's a voice message
             if content == "[Voice Message]":
-                logger.info(f"Voice message received from {sender_id}, but direct download from bridge is not yet supported.")
+                logger.info("Voice message received from {}, but direct download from bridge is not yet supported.", sender_id)
                 content = "[Voice Message: Transcription not available for WhatsApp yet]"
             await self._handle_message(
@@ -133,7 +133,7 @@ class WhatsAppChannel(BaseChannel):
         elif msg_type == "status":
             # Connection status update
             status = data.get("status")
-            logger.info(f"WhatsApp status: {status}")
+            logger.info("WhatsApp status: {}", status)
             if status == "connected":
                 self._connected = True
@@ -145,4 +145,4 @@ class WhatsAppChannel(BaseChannel):
                 logger.info("Scan QR code in the bridge terminal to connect WhatsApp")
         elif msg_type == "error":
-            logger.error(f"WhatsApp bridge error: {data.get('error')}")
+            logger.error("WhatsApp bridge error: {}", data.get('error'))
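The `ensure_ascii=False` change above affects any non-ASCII chat text sent to the bridge. A quick standard-library sketch of the difference (the payload content is illustrative):

```python
import json

payload = {"type": "send", "to": "12345", "text": "你好 👋"}

escaped = json.dumps(payload)                       # default: non-ASCII escaped to \uXXXX
readable = json.dumps(payload, ensure_ascii=False)  # raw UTF-8 text on the wire

# Both forms decode back to the identical dict; only the encoding differs.
assert json.loads(escaped) == json.loads(readable) == payload
print(readable)
```

The readable form is what most WhatsApp bridge implementations expect, and it keeps CJK text and emoji inspectable in logs instead of rendering as escape sequences.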

View File

@@ -19,6 +19,7 @@ from prompt_toolkit.history import FileHistory
 from prompt_toolkit.patch_stdout import patch_stdout

 from nanobot import __version__, __logo__
+from nanobot.config.schema import Config

 app = typer.Typer(
     name="nanobot",
@@ -242,7 +243,7 @@ Information about the user goes here.
     for filename, content in templates.items():
         file_path = workspace / filename
         if not file_path.exists():
-            file_path.write_text(content)
+            file_path.write_text(content, encoding="utf-8")
            console.print(f"  [dim]Created (unknown)[/dim]")

     # Create memory directory and MEMORY.md
@@ -265,12 +266,12 @@ This file stores important information that should persist across sessions.
 ## Important Notes
 (Things to remember)
-""")
+""", encoding="utf-8")
         console.print("  [dim]Created memory/MEMORY.md[/dim]")

     history_file = memory_dir / "HISTORY.md"
     if not history_file.exists():
-        history_file.write_text("")
+        history_file.write_text("", encoding="utf-8")
         console.print("  [dim]Created memory/HISTORY.md[/dim]")

     # Create skills directory for custom user skills
@@ -278,21 +279,41 @@ This file stores important information that should persist across sessions.
     skills_dir.mkdir(exist_ok=True)

-def _make_provider(config):
-    """Create LiteLLMProvider from config. Exits if no API key found."""
+def _make_provider(config: Config):
+    """Create the appropriate LLM provider from config."""
     from nanobot.providers.litellm_provider import LiteLLMProvider
-    p = config.get_provider()
+    from nanobot.providers.openai_codex_provider import OpenAICodexProvider
+    from nanobot.providers.custom_provider import CustomProvider

     model = config.agents.defaults.model
-    if not (p and p.api_key) and not model.startswith("bedrock/"):
+    provider_name = config.get_provider_name(model)
+    p = config.get_provider(model)
+
+    # OpenAI Codex (OAuth)
+    if provider_name == "openai_codex" or model.startswith("openai-codex/"):
+        return OpenAICodexProvider(default_model=model)
+
+    # Custom: direct OpenAI-compatible endpoint, bypasses LiteLLM
+    if provider_name == "custom":
+        return CustomProvider(
+            api_key=p.api_key if p else "no-key",
+            api_base=config.get_api_base(model) or "http://localhost:8000/v1",
+            default_model=model,
+        )
+
+    from nanobot.providers.registry import find_by_name
+    spec = find_by_name(provider_name)
+    if not model.startswith("bedrock/") and not (p and p.api_key) and not (spec and spec.is_oauth):
         console.print("[red]Error: No API key configured.[/red]")
         console.print("Set one in ~/.nanobot/config.json under providers section")
         raise typer.Exit(1)

     return LiteLLMProvider(
         api_key=p.api_key if p else None,
-        api_base=config.get_api_base(),
+        api_base=config.get_api_base(model),
         default_model=model,
         extra_headers=p.extra_headers if p else None,
-        provider_name=config.get_provider_name(),
+        provider_name=provider_name,
     )
@@ -429,9 +450,10 @@ def agent(
     logs: bool = typer.Option(False, "--logs/--no-logs", help="Show nanobot runtime logs during chat"),
 ):
     """Interact with the agent directly."""
-    from nanobot.config.loader import load_config
+    from nanobot.config.loader import load_config, get_data_dir
     from nanobot.bus.queue import MessageBus
     from nanobot.agent.loop import AgentLoop
+    from nanobot.cron.service import CronService
     from loguru import logger

     config = load_config()
@@ -439,6 +461,10 @@ def agent(
     bus = MessageBus()
     provider = _make_provider(config)

+    # Create cron service for tool usage (no callback needed for CLI unless running)
+    cron_store_path = get_data_dir() / "cron" / "jobs.json"
+    cron = CronService(cron_store_path)
+
     if logs:
         logger.enable("nanobot")
     else:
@@ -455,6 +481,7 @@ def agent(
         memory_window=config.agents.defaults.memory_window,
         brave_api_key=config.tools.web.search.api_key or None,
         exec_config=config.tools.exec,
+        cron_service=cron,
         restrict_to_workspace=config.tools.restrict_to_workspace,
         mcp_servers=config.tools.mcp_servers,
     )
@@ -467,11 +494,14 @@ def agent(
         # Animated spinner is safe to use with prompt_toolkit input handling
         return console.status("[dim]nanobot is thinking...[/dim]", spinner="dots")

+    async def _cli_progress(content: str) -> None:
+        console.print(f"  [dim]↳ {content}[/dim]")
+
     if message:
         # Single message mode
         async def run_once():
             with _thinking_ctx():
-                response = await agent_loop.process_direct(message, session_id)
+                response = await agent_loop.process_direct(message, session_id, on_progress=_cli_progress)
             _print_agent_response(response, render_markdown=markdown)
             await agent_loop.close_mcp()
@@ -504,7 +534,7 @@ def agent(
                     break

                 with _thinking_ctx():
-                    response = await agent_loop.process_direct(user_input, session_id)
+                    response = await agent_loop.process_direct(user_input, session_id, on_progress=_cli_progress)
                 _print_agent_response(response, render_markdown=markdown)
             except KeyboardInterrupt:
                 _restore_terminal()
@@ -710,20 +740,26 @@ def cron_list(
     table.add_column("Next Run")

     import time
+    from datetime import datetime as _dt
+    from zoneinfo import ZoneInfo
     for job in jobs:
         # Format schedule
         if job.schedule.kind == "every":
             sched = f"every {(job.schedule.every_ms or 0) // 1000}s"
         elif job.schedule.kind == "cron":
-            sched = job.schedule.expr or ""
+            sched = f"{job.schedule.expr or ''} ({job.schedule.tz})" if job.schedule.tz else (job.schedule.expr or "")
         else:
             sched = "one-time"

         # Format next run
         next_run = ""
         if job.state.next_run_at_ms:
-            next_time = time.strftime("%Y-%m-%d %H:%M", time.localtime(job.state.next_run_at_ms / 1000))
-            next_run = next_time
+            ts = job.state.next_run_at_ms / 1000
+            try:
+                tz = ZoneInfo(job.schedule.tz) if job.schedule.tz else None
+                next_run = _dt.fromtimestamp(ts, tz).strftime("%Y-%m-%d %H:%M")
+            except Exception:
+                next_run = time.strftime("%Y-%m-%d %H:%M", time.localtime(ts))

         status = "[green]enabled[/green]" if job.enabled else "[dim]disabled[/dim]"
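The hunk above renders a job's next run in the schedule's own IANA timezone when one is set, falling back to server-local time. The core conversion, sketched with the standard library only (the timestamp value and zone are illustrative):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

next_run_at_ms = 1_750_000_000_000  # epoch milliseconds, as stored on job state

ts = next_run_at_ms / 1000
utc = datetime.fromtimestamp(ts, ZoneInfo("UTC"))
local = datetime.fromtimestamp(ts, ZoneInfo("America/Vancouver"))

# Same instant, rendered in the schedule's own timezone.
print(utc.strftime("%Y-%m-%d %H:%M"))
print(local.strftime("%Y-%m-%d %H:%M"))
```

Passing a `tzinfo` to `datetime.fromtimestamp` converts the absolute instant rather than shifting it, which is why the `try/except` fallback to `time.localtime` stays correct when the stored zone name is invalid.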
@@ -738,6 +774,7 @@ def cron_add(
     message: str = typer.Option(..., "--message", "-m", help="Message for agent"),
     every: int = typer.Option(None, "--every", "-e", help="Run every N seconds"),
     cron_expr: str = typer.Option(None, "--cron", "-c", help="Cron expression (e.g. '0 9 * * *')"),
+    tz: str | None = typer.Option(None, "--tz", help="IANA timezone for cron (e.g. 'America/Vancouver')"),
     at: str = typer.Option(None, "--at", help="Run once at time (ISO format)"),
     deliver: bool = typer.Option(False, "--deliver", "-d", help="Deliver response to channel"),
     to: str = typer.Option(None, "--to", help="Recipient for delivery"),
@@ -748,11 +785,15 @@ def cron_add(
     from nanobot.cron.service import CronService
     from nanobot.cron.types import CronSchedule

+    if tz and not cron_expr:
+        console.print("[red]Error: --tz can only be used with --cron[/red]")
+        raise typer.Exit(1)
+
     # Determine schedule type
     if every:
         schedule = CronSchedule(kind="every", every_ms=every * 1000)
     elif cron_expr:
-        schedule = CronSchedule(kind="cron", expr=cron_expr)
+        schedule = CronSchedule(kind="cron", expr=cron_expr, tz=tz)
     elif at:
         import datetime
         dt = datetime.datetime.fromisoformat(at)
@@ -764,15 +805,19 @@ def cron_add(
     store_path = get_data_dir() / "cron" / "jobs.json"
     service = CronService(store_path)

-    job = service.add_job(
-        name=name,
-        schedule=schedule,
-        message=message,
-        deliver=deliver,
-        to=to,
-        channel=channel,
-    )
+    try:
+        job = service.add_job(
+            name=name,
+            schedule=schedule,
+            message=message,
+            deliver=deliver,
+            to=to,
+            channel=channel,
+        )
+    except ValueError as e:
+        console.print(f"[red]Error: {e}[/red]")
+        raise typer.Exit(1) from e

     console.print(f"[green]✓[/green] Added job '{job.name}' ({job.id})")
@@ -863,7 +908,9 @@ def status():
         p = getattr(config.providers, spec.name, None)
         if p is None:
             continue
-        if spec.is_local:
+        if spec.is_oauth:
+            console.print(f"{spec.label}: [green]✓ (OAuth)[/green]")
+        elif spec.is_local:
             # Local deployments show api_base instead of api_key
             if p.api_base:
                 console.print(f"{spec.label}: [green]✓ {p.api_base}[/green]")
@@ -874,5 +921,88 @@ def status():
             console.print(f"{spec.label}: {'[green]✓[/green]' if has_key else '[dim]not set[/dim]'}")

+
+# ============================================================================
+# OAuth Login
+# ============================================================================
+
+provider_app = typer.Typer(help="Manage providers")
+app.add_typer(provider_app, name="provider")
+
+_LOGIN_HANDLERS: dict[str, callable] = {}
+
+
+def _register_login(name: str):
+    def decorator(fn):
+        _LOGIN_HANDLERS[name] = fn
+        return fn
+    return decorator
+
+
+@provider_app.command("login")
+def provider_login(
+    provider: str = typer.Argument(..., help="OAuth provider (e.g. 'openai-codex', 'github-copilot')"),
+):
+    """Authenticate with an OAuth provider."""
+    from nanobot.providers.registry import PROVIDERS
+
+    key = provider.replace("-", "_")
+    spec = next((s for s in PROVIDERS if s.name == key and s.is_oauth), None)
+    if not spec:
+        names = ", ".join(s.name.replace("_", "-") for s in PROVIDERS if s.is_oauth)
+        console.print(f"[red]Unknown OAuth provider: {provider}[/red] Supported: {names}")
+        raise typer.Exit(1)
+
+    handler = _LOGIN_HANDLERS.get(spec.name)
+    if not handler:
+        console.print(f"[red]Login not implemented for {spec.label}[/red]")
+        raise typer.Exit(1)
+
+    console.print(f"{__logo__} OAuth Login - {spec.label}\n")
+    handler()
+
+
+@_register_login("openai_codex")
+def _login_openai_codex() -> None:
+    try:
+        from oauth_cli_kit import get_token, login_oauth_interactive
+        token = None
+        try:
+            token = get_token()
+        except Exception:
+            pass
+        if not (token and token.access):
+            console.print("[cyan]Starting interactive OAuth login...[/cyan]\n")
+            token = login_oauth_interactive(
+                print_fn=lambda s: console.print(s),
+                prompt_fn=lambda s: typer.prompt(s),
+            )
+        if not (token and token.access):
+            console.print("[red]✗ Authentication failed[/red]")
+            raise typer.Exit(1)
+        console.print(f"[green]✓ Authenticated with OpenAI Codex[/green] [dim]{token.account_id}[/dim]")
+    except ImportError:
+        console.print("[red]oauth_cli_kit not installed. Run: pip install oauth-cli-kit[/red]")
+        raise typer.Exit(1)
+
+
+@_register_login("github_copilot")
+def _login_github_copilot() -> None:
+    import asyncio
+    console.print("[cyan]Starting GitHub Copilot device flow...[/cyan]\n")
+
+    async def _trigger():
+        from litellm import acompletion
+        await acompletion(model="github_copilot/gpt-4o", messages=[{"role": "user", "content": "hi"}], max_tokens=1)
+
+    try:
+        asyncio.run(_trigger())
+        console.print("[green]✓ Authenticated with GitHub Copilot[/green]")
+    except Exception as e:
+        console.print(f"[red]Authentication error: {e}[/red]")
+        raise typer.Exit(1)
+

 if __name__ == "__main__":
     app()
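The `_register_login` decorator added above is a small name-to-handler registry: each handler registers itself at import time, and the CLI dispatches by provider name. A stripped-down sketch of the pattern (the names and return values are illustrative, not nanobot's API):

```python
from typing import Callable

_HANDLERS: dict[str, Callable[[], str]] = {}

def register(name: str):
    """Map a provider name to its login handler at import time."""
    def decorator(fn: Callable[[], str]) -> Callable[[], str]:
        _HANDLERS[name] = fn
        return fn
    return decorator

@register("openai_codex")
def login_codex() -> str:
    return "codex login flow"

@register("github_copilot")
def login_copilot() -> str:
    return "copilot device flow"

def dispatch(provider: str) -> str:
    # Normalize CLI-style names ("openai-codex") to registry keys.
    handler = _HANDLERS.get(provider.replace("-", "_"))
    if handler is None:
        raise KeyError(f"no login handler for {provider}")
    return handler()

print(dispatch("openai-codex"))
```

Adding a new OAuth provider then only requires writing one decorated function; the dispatch code never changes.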

View File

@@ -2,7 +2,6 @@
 import json
 from pathlib import Path
-from typing import Any

 from nanobot.config.schema import Config
@@ -21,45 +20,43 @@ def get_data_dir() -> Path:
 def load_config(config_path: Path | None = None) -> Config:
     """
     Load configuration from file or create default.

     Args:
         config_path: Optional path to config file. Uses default if not provided.

     Returns:
         Loaded configuration object.
     """
     path = config_path or get_config_path()

     if path.exists():
         try:
-            with open(path) as f:
+            with open(path, encoding="utf-8") as f:
                 data = json.load(f)
             data = _migrate_config(data)
-            return Config.model_validate(convert_keys(data))
+            return Config.model_validate(data)
         except (json.JSONDecodeError, ValueError) as e:
             print(f"Warning: Failed to load config from {path}: {e}")
             print("Using default configuration.")

     return Config()


 def save_config(config: Config, config_path: Path | None = None) -> None:
     """
     Save configuration to file.

     Args:
         config: Configuration to save.
         config_path: Optional path to save to. Uses default if not provided.
     """
     path = config_path or get_config_path()
     path.parent.mkdir(parents=True, exist_ok=True)

-    # Convert to camelCase format
-    data = config.model_dump()
-    data = convert_to_camel(data)
+    data = config.model_dump(by_alias=True)

-    with open(path, "w") as f:
-        json.dump(data, f, indent=2)
+    with open(path, "w", encoding="utf-8") as f:
+        json.dump(data, f, indent=2, ensure_ascii=False)


 def _migrate_config(data: dict) -> dict:
@@ -70,37 +67,3 @@ def _migrate_config(data: dict) -> dict:
     if "restrictToWorkspace" in exec_cfg and "restrictToWorkspace" not in tools:
         tools["restrictToWorkspace"] = exec_cfg.pop("restrictToWorkspace")
     return data
-
-
-def convert_keys(data: Any) -> Any:
-    """Convert camelCase keys to snake_case for Pydantic."""
-    if isinstance(data, dict):
-        return {camel_to_snake(k): convert_keys(v) for k, v in data.items()}
-    if isinstance(data, list):
-        return [convert_keys(item) for item in data]
-    return data
-
-
-def convert_to_camel(data: Any) -> Any:
-    """Convert snake_case keys to camelCase."""
-    if isinstance(data, dict):
-        return {snake_to_camel(k): convert_to_camel(v) for k, v in data.items()}
-    if isinstance(data, list):
-        return [convert_to_camel(item) for item in data]
-    return data
-
-
-def camel_to_snake(name: str) -> str:
-    """Convert camelCase to snake_case."""
-    result = []
-    for i, char in enumerate(name):
-        if char.isupper() and i > 0:
-            result.append("_")
-        result.append(char.lower())
-    return "".join(result)
-
-
-def snake_to_camel(name: str) -> str:
-    """Convert snake_case to camelCase."""
-    components = name.split("_")
-    return components[0] + "".join(x.title() for x in components[1:])
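The hand-rolled case converters deleted above are replaced by Pydantic's alias machinery: with a base model configured via `alias_generator=to_camel`, `model_validate` accepts camelCase or snake_case keys, and `model_dump(by_alias=True)` emits the on-disk camelCase form. A minimal sketch (requires pydantic v2; `BridgeConfig` is an illustrative stand-in for the real config classes):

```python
from pydantic import BaseModel, ConfigDict
from pydantic.alias_generators import to_camel

class BridgeConfig(BaseModel):
    # populate_by_name lets both the field name and its camelCase alias validate.
    model_config = ConfigDict(alias_generator=to_camel, populate_by_name=True)
    bridge_url: str = "ws://localhost:3001"
    allow_from: list[str] = []

# Accepts camelCase (alias) and snake_case (field name) input alike.
a = BridgeConfig.model_validate({"bridgeUrl": "ws://x:1", "allowFrom": ["1"]})
b = BridgeConfig.model_validate({"bridge_url": "ws://x:1", "allow_from": ["1"]})
assert a == b

# Serialization picks the camelCase on-disk form with no helper code.
print(a.model_dump(by_alias=True))
```

This removes an entire class of bugs the manual converters had to handle (nested dicts, lists of models, round-trip drift), since the alias mapping now lives on the schema itself.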

View File

@@ -2,27 +2,37 @@

 from pathlib import Path
 from pydantic import BaseModel, Field, ConfigDict
+from pydantic.alias_generators import to_camel
 from pydantic_settings import BaseSettings


-class WhatsAppConfig(BaseModel):
+class Base(BaseModel):
+    """Base model that accepts both camelCase and snake_case keys."""
+    model_config = ConfigDict(alias_generator=to_camel, populate_by_name=True)
+
+
+class WhatsAppConfig(Base):
     """WhatsApp channel configuration."""
     enabled: bool = False
     bridge_url: str = "ws://localhost:3001"
     bridge_token: str = ""  # Shared token for bridge auth (optional, recommended)
     allow_from: list[str] = Field(default_factory=list)  # Allowed phone numbers


-class TelegramConfig(BaseModel):
+class TelegramConfig(Base):
     """Telegram channel configuration."""
     enabled: bool = False
     token: str = ""  # Bot token from @BotFather
     allow_from: list[str] = Field(default_factory=list)  # Allowed user IDs or usernames
     proxy: str | None = None  # HTTP/SOCKS5 proxy URL, e.g. "http://127.0.0.1:7890" or "socks5://127.0.0.1:1080"


-class FeishuConfig(BaseModel):
+class FeishuConfig(Base):
     """Feishu/Lark channel configuration using WebSocket long connection."""
     enabled: bool = False
     app_id: str = ""  # App ID from Feishu Open Platform
     app_secret: str = ""  # App Secret from Feishu Open Platform
@@ -31,24 +41,28 @@ class FeishuConfig(BaseModel):
     allow_from: list[str] = Field(default_factory=list)  # Allowed user open_ids


-class DingTalkConfig(BaseModel):
+class DingTalkConfig(Base):
     """DingTalk channel configuration using Stream mode."""
     enabled: bool = False
     client_id: str = ""  # AppKey
     client_secret: str = ""  # AppSecret
     allow_from: list[str] = Field(default_factory=list)  # Allowed staff_ids


-class DiscordConfig(BaseModel):
+class DiscordConfig(Base):
     """Discord channel configuration."""
     enabled: bool = False
     token: str = ""  # Bot token from Discord Developer Portal
     allow_from: list[str] = Field(default_factory=list)  # Allowed user IDs
     gateway_url: str = "wss://gateway.discord.gg/?v=10&encoding=json"
     intents: int = 37377  # GUILDS + GUILD_MESSAGES + DIRECT_MESSAGES + MESSAGE_CONTENT

-class EmailConfig(BaseModel):
+
+class EmailConfig(Base):
     """Email channel configuration (IMAP inbound + SMTP outbound)."""
     enabled: bool = False
     consent_granted: bool = False  # Explicit owner permission to access mailbox data
@@ -78,18 +92,21 @@ class EmailConfig(BaseModel):
     allow_from: list[str] = Field(default_factory=list)  # Allowed sender email addresses


-class MochatMentionConfig(BaseModel):
+class MochatMentionConfig(Base):
     """Mochat mention behavior configuration."""
     require_in_groups: bool = False


-class MochatGroupRule(BaseModel):
+class MochatGroupRule(Base):
     """Mochat per-group mention requirement."""
     require_mention: bool = False


-class MochatConfig(BaseModel):
+class MochatConfig(Base):
     """Mochat channel configuration."""
     enabled: bool = False
     base_url: str = "https://mochat.io"
     socket_url: str = ""
@@ -114,36 +131,42 @@ class MochatConfig(BaseModel):
     reply_delay_ms: int = 120000


-class SlackDMConfig(BaseModel):
+class SlackDMConfig(Base):
     """Slack DM policy configuration."""
     enabled: bool = True
     policy: str = "open"  # "open" or "allowlist"
     allow_from: list[str] = Field(default_factory=list)  # Allowed Slack user IDs


-class SlackConfig(BaseModel):
+class SlackConfig(Base):
     """Slack channel configuration."""
     enabled: bool = False
     mode: str = "socket"  # "socket" supported
     webhook_path: str = "/slack/events"
     bot_token: str = ""  # xoxb-...
     app_token: str = ""  # xapp-...
     user_token_read_only: bool = True
+    reply_in_thread: bool = True
+    react_emoji: str = "eyes"
     group_policy: str = "mention"  # "mention", "open", "allowlist"
     group_allow_from: list[str] = Field(default_factory=list)  # Allowed channel IDs if allowlist
     dm: SlackDMConfig = Field(default_factory=SlackDMConfig)


-class QQConfig(BaseModel):
+class QQConfig(Base):
     """QQ channel configuration using botpy SDK."""
     enabled: bool = False
     app_id: str = ""  # 机器人 ID (AppID) from q.qq.com
     secret: str = ""  # 机器人密钥 (AppSecret) from q.qq.com
     allow_from: list[str] = Field(default_factory=list)  # Allowed user openids (empty = public access)


-class ChannelsConfig(BaseModel):
+class ChannelsConfig(Base):
     """Configuration for chat channels."""
     whatsapp: WhatsAppConfig = Field(default_factory=WhatsAppConfig)
     telegram: TelegramConfig = Field(default_factory=TelegramConfig)
     discord: DiscordConfig = Field(default_factory=DiscordConfig)
@@ -155,8 +178,9 @@ class ChannelsConfig(BaseModel):
     qq: QQConfig = Field(default_factory=QQConfig)


-class AgentDefaults(BaseModel):
+class AgentDefaults(Base):
     """Default agent configuration."""
     workspace: str = "~/.nanobot/workspace"
     model: str = "anthropic/claude-opus-4-5"
     max_tokens: int = 8192
@@ -165,20 +189,23 @@ class AgentDefaults(BaseModel):
     memory_window: int = 50


-class AgentsConfig(BaseModel):
+class AgentsConfig(Base):
     """Agent configuration."""
     defaults: AgentDefaults = Field(default_factory=AgentDefaults)


-class ProviderConfig(BaseModel):
+class ProviderConfig(Base):
     """LLM provider configuration."""
     api_key: str = ""
     api_base: str | None = None
     extra_headers: dict[str, str] | None = None  # Custom headers (e.g. APP-Code for AiHubMix)


-class ProvidersConfig(BaseModel):
+class ProvidersConfig(Base):
     """Configuration for LLM providers."""
     custom: ProviderConfig = Field(default_factory=ProviderConfig)  # Any OpenAI-compatible endpoint
     anthropic: ProviderConfig = Field(default_factory=ProviderConfig)
     openai: ProviderConfig = Field(default_factory=ProviderConfig)
@@ -192,40 +219,49 @@ class ProvidersConfig(BaseModel):
     moonshot: ProviderConfig = Field(default_factory=ProviderConfig)
     minimax: ProviderConfig = Field(default_factory=ProviderConfig)
     aihubmix: ProviderConfig = Field(default_factory=ProviderConfig)  # AiHubMix API gateway
+    siliconflow: ProviderConfig = Field(default_factory=ProviderConfig)  # SiliconFlow (硅基流动) API gateway
+    openai_codex: ProviderConfig = Field(default_factory=ProviderConfig)  # OpenAI Codex (OAuth)
+    github_copilot: ProviderConfig = Field(default_factory=ProviderConfig)  # Github Copilot (OAuth)


-class GatewayConfig(BaseModel):
+class GatewayConfig(Base):
     """Gateway/server configuration."""
     host: str = "0.0.0.0"
     port: int = 18790


-class WebSearchConfig(BaseModel):
+class WebSearchConfig(Base):
     """Web search tool configuration."""
     api_key: str = ""  # Brave Search API key
     max_results: int = 5


-class WebToolsConfig(BaseModel):
+class WebToolsConfig(Base):
     """Web tools configuration."""
     search: WebSearchConfig = Field(default_factory=WebSearchConfig)


-class ExecToolConfig(BaseModel):
+class ExecToolConfig(Base):
     """Shell exec tool configuration."""
     timeout: int = 60


-class MCPServerConfig(BaseModel):
+class MCPServerConfig(Base):
     """MCP server connection configuration (stdio or HTTP)."""
     command: str = ""  # Stdio: command to run (e.g. "npx")
     args: list[str] = Field(default_factory=list)  # Stdio: command arguments
     env: dict[str, str] = Field(default_factory=dict)  # Stdio: extra env vars
     url: str = ""  # HTTP: streamable HTTP endpoint URL


-class ToolsConfig(BaseModel):
+class ToolsConfig(Base):
     """Tools configuration."""
     web: WebToolsConfig = Field(default_factory=WebToolsConfig)
     exec: ExecToolConfig = Field(default_factory=ExecToolConfig)
     restrict_to_workspace: bool = False  # If true, restrict all tool access to workspace directory
@@ -234,30 +270,50 @@ class ToolsConfig(BaseModel):
class Config(BaseSettings): class Config(BaseSettings):
"""Root configuration for nanobot.""" """Root configuration for nanobot."""
agents: AgentsConfig = Field(default_factory=AgentsConfig) agents: AgentsConfig = Field(default_factory=AgentsConfig)
channels: ChannelsConfig = Field(default_factory=ChannelsConfig) channels: ChannelsConfig = Field(default_factory=ChannelsConfig)
providers: ProvidersConfig = Field(default_factory=ProvidersConfig) providers: ProvidersConfig = Field(default_factory=ProvidersConfig)
gateway: GatewayConfig = Field(default_factory=GatewayConfig) gateway: GatewayConfig = Field(default_factory=GatewayConfig)
tools: ToolsConfig = Field(default_factory=ToolsConfig) tools: ToolsConfig = Field(default_factory=ToolsConfig)
@property @property
def workspace_path(self) -> Path: def workspace_path(self) -> Path:
"""Get expanded workspace path.""" """Get expanded workspace path."""
return Path(self.agents.defaults.workspace).expanduser() return Path(self.agents.defaults.workspace).expanduser()
def _match_provider(self, model: str | None = None) -> tuple["ProviderConfig | None", str | None]: def _match_provider(self, model: str | None = None) -> tuple["ProviderConfig | None", str | None]:
"""Match provider config and its registry name. Returns (config, spec_name).""" """Match provider config and its registry name. Returns (config, spec_name)."""
from nanobot.providers.registry import PROVIDERS from nanobot.providers.registry import PROVIDERS
model_lower = (model or self.agents.defaults.model).lower() model_lower = (model or self.agents.defaults.model).lower()
model_normalized = model_lower.replace("-", "_")
model_prefix = model_lower.split("/", 1)[0] if "/" in model_lower else ""
normalized_prefix = model_prefix.replace("-", "_")
def _kw_matches(kw: str) -> bool:
kw = kw.lower()
return kw in model_lower or kw.replace("-", "_") in model_normalized
# Explicit provider prefix wins — prevents `github-copilot/...codex` matching openai_codex.
for spec in PROVIDERS:
p = getattr(self.providers, spec.name, None)
if p and model_prefix and normalized_prefix == spec.name:
if spec.is_oauth or p.api_key:
return p, spec.name
# Match by keyword (order follows PROVIDERS registry) # Match by keyword (order follows PROVIDERS registry)
for spec in PROVIDERS: for spec in PROVIDERS:
p = getattr(self.providers, spec.name, None) p = getattr(self.providers, spec.name, None)
if p and any(kw in model_lower for kw in spec.keywords) and p.api_key: if p and any(_kw_matches(kw) for kw in spec.keywords):
return p, spec.name if spec.is_oauth or p.api_key:
return p, spec.name
# Fallback: gateways first, then others (follows registry order) # Fallback: gateways first, then others (follows registry order)
# OAuth providers are NOT valid fallbacks — they require explicit model selection
for spec in PROVIDERS: for spec in PROVIDERS:
if spec.is_oauth:
continue
p = getattr(self.providers, spec.name, None) p = getattr(self.providers, spec.name, None)
if p and p.api_key: if p and p.api_key:
return p, spec.name return p, spec.name
@@ -277,10 +333,11 @@ class Config(BaseSettings):
"""Get API key for the given model. Falls back to first available key.""" """Get API key for the given model. Falls back to first available key."""
p = self.get_provider(model) p = self.get_provider(model)
return p.api_key if p else None return p.api_key if p else None
def get_api_base(self, model: str | None = None) -> str | None: def get_api_base(self, model: str | None = None) -> str | None:
"""Get API base URL for the given model. Applies default URLs for known gateways.""" """Get API base URL for the given model. Applies default URLs for known gateways."""
from nanobot.providers.registry import find_by_name from nanobot.providers.registry import find_by_name
p, name = self._match_provider(model) p, name = self._match_provider(model)
if p and p.api_base: if p and p.api_base:
return p.api_base return p.api_base
@@ -292,8 +349,5 @@ class Config(BaseSettings):
if spec and spec.is_gateway and spec.default_api_base: if spec and spec.is_gateway and spec.default_api_base:
return spec.default_api_base return spec.default_api_base
return None return None
model_config = ConfigDict( model_config = ConfigDict(env_prefix="NANOBOT_", env_nested_delimiter="__")
env_prefix="NANOBOT_",
env_nested_delimiter="__"
)
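The hyphen/underscore-insensitive keyword matching used by `_match_provider` can be exercised standalone (a minimal re-implementation of the diff's `_kw_matches` as a free function, for illustration only):

```python
def kw_matches(kw: str, model: str) -> bool:
    """True if the keyword occurs in the model name, treating '-' and '_' as equivalent."""
    model_lower = model.lower()
    model_normalized = model_lower.replace("-", "_")
    kw = kw.lower()
    return kw in model_lower or kw.replace("-", "_") in model_normalized
```

So the registry name `github_copilot` still matches `github-copilot/gpt-4o`, even though one side uses underscores and the other hyphens.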

View File

@@ -32,7 +32,8 @@ def _compute_next_run(schedule: CronSchedule, now_ms: int) -> int | None:
     try:
         from croniter import croniter
         from zoneinfo import ZoneInfo

-        base_time = time.time()
+        # Use caller-provided reference time for deterministic scheduling
+        base_time = now_ms / 1000
         tz = ZoneInfo(schedule.tz) if schedule.tz else datetime.now().astimezone().tzinfo
         base_dt = datetime.fromtimestamp(base_time, tz=tz)
         cron = croniter(schedule.expr, base_dt)
@@ -44,6 +45,20 @@ def _compute_next_run(schedule: CronSchedule, now_ms: int) -> int | None:
     return None

+def _validate_schedule_for_add(schedule: CronSchedule) -> None:
+    """Validate schedule fields that would otherwise create non-runnable jobs."""
+    if schedule.tz and schedule.kind != "cron":
+        raise ValueError("tz can only be used with cron schedules")
+    if schedule.kind == "cron" and schedule.tz:
+        try:
+            from zoneinfo import ZoneInfo
+            ZoneInfo(schedule.tz)
+        except Exception:
+            raise ValueError(f"unknown timezone '{schedule.tz}'") from None

 class CronService:
     """Service for managing and executing scheduled jobs."""
@@ -65,7 +80,7 @@ class CronService:
         if self.store_path.exists():
             try:
-                data = json.loads(self.store_path.read_text())
+                data = json.loads(self.store_path.read_text(encoding="utf-8"))
                 jobs = []
                 for j in data.get("jobs", []):
                     jobs.append(CronJob(
@@ -98,7 +113,7 @@ class CronService:
                     ))
                 self._store = CronStore(jobs=jobs)
             except Exception as e:
-                logger.warning(f"Failed to load cron store: {e}")
+                logger.warning("Failed to load cron store: {}", e)
                 self._store = CronStore()
         else:
             self._store = CronStore()
@@ -147,7 +162,7 @@ class CronService:
             ]
         }
-        self.store_path.write_text(json.dumps(data, indent=2))
+        self.store_path.write_text(json.dumps(data, indent=2, ensure_ascii=False), encoding="utf-8")

     async def start(self) -> None:
         """Start the cron service."""
@@ -156,7 +171,7 @@ class CronService:
         self._recompute_next_runs()
         self._save_store()
         self._arm_timer()
-        logger.info(f"Cron service started with {len(self._store.jobs if self._store else [])} jobs")
+        logger.info("Cron service started with {} jobs", len(self._store.jobs if self._store else []))

     def stop(self) -> None:
         """Stop the cron service."""
@@ -221,7 +236,7 @@ class CronService:
     async def _execute_job(self, job: CronJob) -> None:
         """Execute a single job."""
         start_ms = _now_ms()
-        logger.info(f"Cron: executing job '{job.name}' ({job.id})")
+        logger.info("Cron: executing job '{}' ({})", job.name, job.id)

         try:
             response = None
@@ -230,12 +245,12 @@ class CronService:
             job.state.last_status = "ok"
             job.state.last_error = None
-            logger.info(f"Cron: job '{job.name}' completed")
+            logger.info("Cron: job '{}' completed", job.name)
         except Exception as e:
             job.state.last_status = "error"
             job.state.last_error = str(e)
-            logger.error(f"Cron: job '{job.name}' failed: {e}")
+            logger.error("Cron: job '{}' failed: {}", job.name, e)

         job.state.last_run_at_ms = start_ms
         job.updated_at_ms = _now_ms()
@@ -271,6 +286,7 @@ class CronService:
     ) -> CronJob:
         """Add a new job."""
         store = self._load_store()
+        _validate_schedule_for_add(schedule)
         now = _now_ms()

         job = CronJob(
@@ -295,7 +311,7 @@ class CronService:
         self._save_store()
         self._arm_timer()
-        logger.info(f"Cron: added job '{name}' ({job.id})")
+        logger.info("Cron: added job '{}' ({})", name, job.id)
         return job

     def remove_job(self, job_id: str) -> bool:
@@ -308,7 +324,7 @@ class CronService:
         if removed:
             self._save_store()
             self._arm_timer()
-            logger.info(f"Cron: removed job {job_id}")
+            logger.info("Cron: removed job {}", job_id)
         return removed

View File

@@ -65,7 +65,7 @@ class HeartbeatService:
         """Read HEARTBEAT.md content."""
         if self.heartbeat_file.exists():
             try:
-                return self.heartbeat_file.read_text()
+                return self.heartbeat_file.read_text(encoding="utf-8")
             except Exception:
                 return None
         return None
@@ -78,7 +78,7 @@ class HeartbeatService:
         self._running = True
         self._task = asyncio.create_task(self._run_loop())
-        logger.info(f"Heartbeat started (every {self.interval_s}s)")
+        logger.info("Heartbeat started (every {}s)", self.interval_s)

     def stop(self) -> None:
         """Stop the heartbeat service."""
@@ -97,7 +97,7 @@ class HeartbeatService:
             except asyncio.CancelledError:
                 break
             except Exception as e:
-                logger.error(f"Heartbeat error: {e}")
+                logger.error("Heartbeat error: {}", e)

     async def _tick(self) -> None:
         """Execute a single heartbeat tick."""
@@ -118,10 +118,10 @@ class HeartbeatService:
             if HEARTBEAT_OK_TOKEN.replace("_", "") in response.upper().replace("_", ""):
                 logger.info("Heartbeat: OK (no action needed)")
             else:
-                logger.info(f"Heartbeat: completed task")
+                logger.info("Heartbeat: completed task")
         except Exception as e:
-            logger.error(f"Heartbeat execution failed: {e}")
+            logger.error("Heartbeat execution failed: {}", e)

     async def trigger_now(self) -> str | None:
         """Manually trigger a heartbeat."""

View File

@@ -2,5 +2,6 @@
 from nanobot.providers.base import LLMProvider, LLMResponse
 from nanobot.providers.litellm_provider import LiteLLMProvider
+from nanobot.providers.openai_codex_provider import OpenAICodexProvider

-__all__ = ["LLMProvider", "LLMResponse", "LiteLLMProvider"]
+__all__ = ["LLMProvider", "LLMResponse", "LiteLLMProvider", "OpenAICodexProvider"]

View File

@@ -0,0 +1,47 @@
+"""Direct OpenAI-compatible provider — bypasses LiteLLM."""
+from __future__ import annotations
+
+from typing import Any
+
+import json_repair
+from openai import AsyncOpenAI
+
+from nanobot.providers.base import LLMProvider, LLMResponse, ToolCallRequest
+
+
+class CustomProvider(LLMProvider):
+    def __init__(self, api_key: str = "no-key", api_base: str = "http://localhost:8000/v1", default_model: str = "default"):
+        super().__init__(api_key, api_base)
+        self.default_model = default_model
+        self._client = AsyncOpenAI(api_key=api_key, base_url=api_base)
+
+    async def chat(self, messages: list[dict[str, Any]], tools: list[dict[str, Any]] | None = None,
+                   model: str | None = None, max_tokens: int = 4096, temperature: float = 0.7) -> LLMResponse:
+        kwargs: dict[str, Any] = {"model": model or self.default_model, "messages": messages,
+                                  "max_tokens": max(1, max_tokens), "temperature": temperature}
+        if tools:
+            kwargs.update(tools=tools, tool_choice="auto")
+        try:
+            return self._parse(await self._client.chat.completions.create(**kwargs))
+        except Exception as e:
+            return LLMResponse(content=f"Error: {e}", finish_reason="error")
+
+    def _parse(self, response: Any) -> LLMResponse:
+        choice = response.choices[0]
+        msg = choice.message
+        tool_calls = [
+            ToolCallRequest(id=tc.id, name=tc.function.name,
+                            arguments=json_repair.loads(tc.function.arguments) if isinstance(tc.function.arguments, str) else tc.function.arguments)
+            for tc in (msg.tool_calls or [])
+        ]
+        u = response.usage
+        return LLMResponse(
+            content=msg.content, tool_calls=tool_calls, finish_reason=choice.finish_reason or "stop",
+            usage={"prompt_tokens": u.prompt_tokens, "completion_tokens": u.completion_tokens, "total_tokens": u.total_tokens} if u else {},
+            reasoning_content=getattr(msg, "reasoning_content", None),
+        )
+
+    def get_default_model(self) -> str:
+        return self.default_model
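Tool-call arguments arrive as strings that may not be valid JSON; `CustomProvider` routes them through the third-party `json_repair` to fix them outright. A stdlib-only sketch of the weaker fallback (wrap rather than repair) that the Codex provider below applies to its argument buffers:

```python
import json
from typing import Any


def parse_tool_args(raw: Any) -> Any:
    """Parse tool-call arguments; wrap unparseable text instead of crashing."""
    if not isinstance(raw, str):
        return raw  # already decoded by the SDK
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return {"raw": raw}  # json_repair would instead try to fix the string
```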

View File

@@ -55,6 +55,9 @@ class LiteLLMProvider(LLMProvider):
         spec = self._gateway or find_by_model(model)
         if not spec:
             return
+        if not spec.env_key:
+            # OAuth/provider-only specs (for example: openai_codex)
+            return

         # Gateway/local overrides existing env; standard provider doesn't
         if self._gateway:
@@ -85,10 +88,21 @@ class LiteLLMProvider(LLMProvider):
         # Standard mode: auto-prefix for known providers
         spec = find_by_model(model)
         if spec and spec.litellm_prefix:
+            model = self._canonicalize_explicit_prefix(model, spec.name, spec.litellm_prefix)
             if not any(model.startswith(s) for s in spec.skip_prefixes):
                 model = f"{spec.litellm_prefix}/{model}"
         return model

+    @staticmethod
+    def _canonicalize_explicit_prefix(model: str, spec_name: str, canonical_prefix: str) -> str:
+        """Normalize explicit provider prefixes like `github-copilot/...`."""
+        if "/" not in model:
+            return model
+        prefix, remainder = model.split("/", 1)
+        if prefix.lower().replace("-", "_") != spec_name:
+            return model
+        return f"{canonical_prefix}/{remainder}"

     def _apply_model_overrides(self, model: str, kwargs: dict[str, Any]) -> None:
         """Apply model-specific parameter overrides from the registry."""
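The new `_canonicalize_explicit_prefix` rewrite can be exercised on its own (logic lifted from the diff into a free function for illustration):

```python
def canonicalize_explicit_prefix(model: str, spec_name: str, canonical_prefix: str) -> str:
    """Rewrite a user-typed prefix like 'github-copilot/x' to the canonical 'github_copilot/x'."""
    if "/" not in model:
        return model
    prefix, remainder = model.split("/", 1)
    if prefix.lower().replace("-", "_") != spec_name:
        return model  # prefix names some other provider; leave untouched
    return f"{canonical_prefix}/{remainder}"
```

Canonicalizing before the `skip_prefixes` check means the subsequent auto-prefixing step sees the spelling LiteLLM expects and does not double-prefix.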

View File

@@ -0,0 +1,312 @@
+"""OpenAI Codex Responses Provider."""
+from __future__ import annotations
+
+import asyncio
+import hashlib
+import json
+from typing import Any, AsyncGenerator
+
+import httpx
+from loguru import logger
+from oauth_cli_kit import get_token as get_codex_token
+
+from nanobot.providers.base import LLMProvider, LLMResponse, ToolCallRequest
+
+DEFAULT_CODEX_URL = "https://chatgpt.com/backend-api/codex/responses"
+DEFAULT_ORIGINATOR = "nanobot"
+
+
+class OpenAICodexProvider(LLMProvider):
+    """Use Codex OAuth to call the Responses API."""
+
+    def __init__(self, default_model: str = "openai-codex/gpt-5.1-codex"):
+        super().__init__(api_key=None, api_base=None)
+        self.default_model = default_model
+
+    async def chat(
+        self,
+        messages: list[dict[str, Any]],
+        tools: list[dict[str, Any]] | None = None,
+        model: str | None = None,
+        max_tokens: int = 4096,
+        temperature: float = 0.7,
+    ) -> LLMResponse:
+        model = model or self.default_model
+        system_prompt, input_items = _convert_messages(messages)
+        token = await asyncio.to_thread(get_codex_token)
+        headers = _build_headers(token.account_id, token.access)
+        body: dict[str, Any] = {
+            "model": _strip_model_prefix(model),
+            "store": False,
+            "stream": True,
+            "instructions": system_prompt,
+            "input": input_items,
+            "text": {"verbosity": "medium"},
+            "include": ["reasoning.encrypted_content"],
+            "prompt_cache_key": _prompt_cache_key(messages),
+            "tool_choice": "auto",
+            "parallel_tool_calls": True,
+        }
+        if tools:
+            body["tools"] = _convert_tools(tools)
+        url = DEFAULT_CODEX_URL
+        try:
+            try:
+                content, tool_calls, finish_reason = await _request_codex(url, headers, body, verify=True)
+            except Exception as e:
+                if "CERTIFICATE_VERIFY_FAILED" not in str(e):
+                    raise
+                logger.warning("SSL certificate verification failed for Codex API; retrying with verify=False")
+                content, tool_calls, finish_reason = await _request_codex(url, headers, body, verify=False)
+            return LLMResponse(
+                content=content,
+                tool_calls=tool_calls,
+                finish_reason=finish_reason,
+            )
+        except Exception as e:
+            return LLMResponse(
+                content=f"Error calling Codex: {str(e)}",
+                finish_reason="error",
+            )
+
+    def get_default_model(self) -> str:
+        return self.default_model
+
+
+def _strip_model_prefix(model: str) -> str:
+    if model.startswith("openai-codex/") or model.startswith("openai_codex/"):
+        return model.split("/", 1)[1]
+    return model
+
+
+def _build_headers(account_id: str, token: str) -> dict[str, str]:
+    return {
+        "Authorization": f"Bearer {token}",
+        "chatgpt-account-id": account_id,
+        "OpenAI-Beta": "responses=experimental",
+        "originator": DEFAULT_ORIGINATOR,
+        "User-Agent": "nanobot (python)",
+        "accept": "text/event-stream",
+        "content-type": "application/json",
+    }
+
+
+async def _request_codex(
+    url: str,
+    headers: dict[str, str],
+    body: dict[str, Any],
+    verify: bool,
+) -> tuple[str, list[ToolCallRequest], str]:
+    async with httpx.AsyncClient(timeout=60.0, verify=verify) as client:
+        async with client.stream("POST", url, headers=headers, json=body) as response:
+            if response.status_code != 200:
+                text = await response.aread()
+                raise RuntimeError(_friendly_error(response.status_code, text.decode("utf-8", "ignore")))
+            return await _consume_sse(response)
+def _convert_tools(tools: list[dict[str, Any]]) -> list[dict[str, Any]]:
+    """Convert OpenAI function-calling schema to Codex flat format."""
+    converted: list[dict[str, Any]] = []
+    for tool in tools:
+        fn = (tool.get("function") or {}) if tool.get("type") == "function" else tool
+        name = fn.get("name")
+        if not name:
+            continue
+        params = fn.get("parameters") or {}
+        converted.append({
+            "type": "function",
+            "name": name,
+            "description": fn.get("description") or "",
+            "parameters": params if isinstance(params, dict) else {},
+        })
+    return converted
+
+
+def _convert_messages(messages: list[dict[str, Any]]) -> tuple[str, list[dict[str, Any]]]:
+    system_prompt = ""
+    input_items: list[dict[str, Any]] = []
+    for idx, msg in enumerate(messages):
+        role = msg.get("role")
+        content = msg.get("content")
+        if role == "system":
+            system_prompt = content if isinstance(content, str) else ""
+            continue
+        if role == "user":
+            input_items.append(_convert_user_message(content))
+            continue
+        if role == "assistant":
+            # Handle text first.
+            if isinstance(content, str) and content:
+                input_items.append(
+                    {
+                        "type": "message",
+                        "role": "assistant",
+                        "content": [{"type": "output_text", "text": content}],
+                        "status": "completed",
+                        "id": f"msg_{idx}",
+                    }
+                )
+            # Then handle tool calls.
+            for tool_call in msg.get("tool_calls", []) or []:
+                fn = tool_call.get("function") or {}
+                call_id, item_id = _split_tool_call_id(tool_call.get("id"))
+                call_id = call_id or f"call_{idx}"
+                item_id = item_id or f"fc_{idx}"
+                input_items.append(
+                    {
+                        "type": "function_call",
+                        "id": item_id,
+                        "call_id": call_id,
+                        "name": fn.get("name"),
+                        "arguments": fn.get("arguments") or "{}",
+                    }
+                )
+            continue
+        if role == "tool":
+            call_id, _ = _split_tool_call_id(msg.get("tool_call_id"))
+            output_text = content if isinstance(content, str) else json.dumps(content, ensure_ascii=False)
+            input_items.append(
+                {
+                    "type": "function_call_output",
+                    "call_id": call_id,
+                    "output": output_text,
+                }
+            )
+            continue
+    return system_prompt, input_items
+def _convert_user_message(content: Any) -> dict[str, Any]:
+    if isinstance(content, str):
+        return {"role": "user", "content": [{"type": "input_text", "text": content}]}
+    if isinstance(content, list):
+        converted: list[dict[str, Any]] = []
+        for item in content:
+            if not isinstance(item, dict):
+                continue
+            if item.get("type") == "text":
+                converted.append({"type": "input_text", "text": item.get("text", "")})
+            elif item.get("type") == "image_url":
+                url = (item.get("image_url") or {}).get("url")
+                if url:
+                    converted.append({"type": "input_image", "image_url": url, "detail": "auto"})
+        if converted:
+            return {"role": "user", "content": converted}
+    return {"role": "user", "content": [{"type": "input_text", "text": ""}]}
+
+
+def _split_tool_call_id(tool_call_id: Any) -> tuple[str, str | None]:
+    if isinstance(tool_call_id, str) and tool_call_id:
+        if "|" in tool_call_id:
+            call_id, item_id = tool_call_id.split("|", 1)
+            return call_id, item_id or None
+        return tool_call_id, None
+    return "call_0", None
+
+
+def _prompt_cache_key(messages: list[dict[str, Any]]) -> str:
+    raw = json.dumps(messages, ensure_ascii=True, sort_keys=True)
+    return hashlib.sha256(raw.encode("utf-8")).hexdigest()
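The `call_id|item_id` packing that `_split_tool_call_id` undoes is easiest to see as a round trip; a standalone sketch (the pack helper is illustrative — the provider builds the packed id inline in `_consume_sse`):

```python
def pack_tool_call_id(call_id: str, item_id: str) -> str:
    """Join the Responses API call_id and item id into one OpenAI-style id."""
    return f"{call_id}|{item_id}"


def split_tool_call_id(tool_call_id):
    """Recover (call_id, item_id); tolerate plain ids and missing values."""
    if isinstance(tool_call_id, str) and tool_call_id:
        if "|" in tool_call_id:
            call_id, item_id = tool_call_id.split("|", 1)
            return call_id, item_id or None
        return tool_call_id, None
    return "call_0", None
```

Packing both ids into the single `id` field lets the OpenAI-shaped message history carry everything needed to reconstruct `function_call` / `function_call_output` items on the next turn.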
+async def _iter_sse(response: httpx.Response) -> AsyncGenerator[dict[str, Any], None]:
+    buffer: list[str] = []
+    async for line in response.aiter_lines():
+        if line == "":
+            if buffer:
+                data_lines = [l[5:].strip() for l in buffer if l.startswith("data:")]
+                buffer = []
+                if not data_lines:
+                    continue
+                data = "\n".join(data_lines).strip()
+                if not data or data == "[DONE]":
+                    continue
+                try:
+                    yield json.loads(data)
+                except Exception:
+                    continue
+            continue
+        buffer.append(line)
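`_iter_sse` frames events on blank lines and folds multiple `data:` lines, per the SSE wire format. The framing rule can be demonstrated synchronously (standalone sketch, no httpx; returns a list instead of an async generator):

```python
import json


def parse_sse(lines):
    """Collect decoded JSON payloads from SSE lines; an event ends at a blank line."""
    events, buffer = [], []
    for line in lines + [""]:  # trailing blank flushes the final event
        if line == "":
            data_lines = [l[5:].strip() for l in buffer if l.startswith("data:")]
            buffer = []
            data = "\n".join(data_lines).strip()
            if not data or data == "[DONE]":
                continue
            try:
                events.append(json.loads(data))
            except json.JSONDecodeError:
                continue
        else:
            buffer.append(line)
    return events
```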
+async def _consume_sse(response: httpx.Response) -> tuple[str, list[ToolCallRequest], str]:
+    content = ""
+    tool_calls: list[ToolCallRequest] = []
+    tool_call_buffers: dict[str, dict[str, Any]] = {}
+    finish_reason = "stop"
+    async for event in _iter_sse(response):
+        event_type = event.get("type")
+        if event_type == "response.output_item.added":
+            item = event.get("item") or {}
+            if item.get("type") == "function_call":
+                call_id = item.get("call_id")
+                if not call_id:
+                    continue
+                tool_call_buffers[call_id] = {
+                    "id": item.get("id") or "fc_0",
+                    "name": item.get("name"),
+                    "arguments": item.get("arguments") or "",
+                }
+        elif event_type == "response.output_text.delta":
+            content += event.get("delta") or ""
+        elif event_type == "response.function_call_arguments.delta":
+            call_id = event.get("call_id")
+            if call_id and call_id in tool_call_buffers:
+                tool_call_buffers[call_id]["arguments"] += event.get("delta") or ""
+        elif event_type == "response.function_call_arguments.done":
+            call_id = event.get("call_id")
+            if call_id and call_id in tool_call_buffers:
+                tool_call_buffers[call_id]["arguments"] = event.get("arguments") or ""
+        elif event_type == "response.output_item.done":
+            item = event.get("item") or {}
+            if item.get("type") == "function_call":
+                call_id = item.get("call_id")
+                if not call_id:
+                    continue
+                buf = tool_call_buffers.get(call_id) or {}
+                args_raw = buf.get("arguments") or item.get("arguments") or "{}"
+                try:
+                    args = json.loads(args_raw)
+                except Exception:
+                    args = {"raw": args_raw}
+                tool_calls.append(
+                    ToolCallRequest(
+                        id=f"{call_id}|{buf.get('id') or item.get('id') or 'fc_0'}",
+                        name=buf.get("name") or item.get("name"),
+                        arguments=args,
+                    )
+                )
+        elif event_type == "response.completed":
+            status = (event.get("response") or {}).get("status")
+            finish_reason = _map_finish_reason(status)
+        elif event_type in {"error", "response.failed"}:
+            raise RuntimeError("Codex response failed")
+    return content, tool_calls, finish_reason
+
+
+_FINISH_REASON_MAP = {"completed": "stop", "incomplete": "length", "failed": "error", "cancelled": "error"}
+
+
+def _map_finish_reason(status: str | None) -> str:
+    return _FINISH_REASON_MAP.get(status or "completed", "stop")
+
+
+def _friendly_error(status_code: int, raw: str) -> str:
+    if status_code == 429:
+        return "ChatGPT usage quota exceeded or rate limit triggered. Please try again later."
+    return f"HTTP {status_code}: {raw}"

View File

@@ -51,6 +51,12 @@ class ProviderSpec:
     # per-model param overrides, e.g. (("kimi-k2.5", {"temperature": 1.0}),)
     model_overrides: tuple[tuple[str, dict[str, Any]], ...] = ()

+    # OAuth-based providers (e.g., OpenAI Codex) don't use API keys
+    is_oauth: bool = False  # if True, uses OAuth flow instead of API key
+    # Direct providers bypass LiteLLM entirely (e.g., CustomProvider)
+    is_direct: bool = False
+
     @property
     def label(self) -> str:
         return self.display_name or self.name.title()
@@ -62,18 +68,14 @@ class ProviderSpec:
 PROVIDERS: tuple[ProviderSpec, ...] = (
-    # === Custom (user-provided OpenAI-compatible endpoint) =================
+    # === Custom (direct OpenAI-compatible endpoint, bypasses LiteLLM) ======
+    # No auto-detection — only activates when user explicitly configures "custom".
     ProviderSpec(
         name="custom",
         keywords=(),
-        env_key="OPENAI_API_KEY",
+        env_key="",
         display_name="Custom",
-        litellm_prefix="openai",
-        skip_prefixes=("openai/",),
-        is_gateway=True,
-        strip_model_prefix=True,
+        litellm_prefix="",
+        is_direct=True,
     ),

     # === Gateways (detected by api_key / api_base, not model name) =========
@@ -117,6 +119,24 @@ PROVIDERS: tuple[ProviderSpec, ...] = (
         model_overrides=(),
     ),

+    # SiliconFlow (硅基流动): OpenAI-compatible gateway, model names keep org prefix
+    ProviderSpec(
+        name="siliconflow",
+        keywords=("siliconflow",),
+        env_key="OPENAI_API_KEY",
+        display_name="SiliconFlow",
+        litellm_prefix="openai",
+        skip_prefixes=(),
+        env_extras=(),
+        is_gateway=True,
+        is_local=False,
+        detect_by_key_prefix="",
+        detect_by_base_keyword="siliconflow",
+        default_api_base="https://api.siliconflow.cn/v1",
+        strip_model_prefix=False,
+        model_overrides=(),
+    ),
+
     # === Standard providers (matched by model-name keywords) ===============
     # Anthropic: LiteLLM recognizes "claude-*" natively, no prefix needed.
@@ -155,6 +175,44 @@ PROVIDERS: tuple[ProviderSpec, ...] = (
         model_overrides=(),
     ),

+    # OpenAI Codex: uses OAuth, not API key.
+    ProviderSpec(
+        name="openai_codex",
+        keywords=("openai-codex", "codex"),
+        env_key="",  # OAuth-based, no API key
+        display_name="OpenAI Codex",
+        litellm_prefix="",  # Not routed through LiteLLM
+        skip_prefixes=(),
+        env_extras=(),
+        is_gateway=False,
+        is_local=False,
+        detect_by_key_prefix="",
+        detect_by_base_keyword="codex",
+        default_api_base="https://chatgpt.com/backend-api",
+        strip_model_prefix=False,
+        model_overrides=(),
+        is_oauth=True,  # OAuth-based authentication
+    ),
+
+    # Github Copilot: uses OAuth, not API key.
+    ProviderSpec(
+        name="github_copilot",
+        keywords=("github_copilot", "copilot"),
+        env_key="",  # OAuth-based, no API key
+        display_name="Github Copilot",
+        litellm_prefix="github_copilot",  # github_copilot/model → github_copilot/model
+        skip_prefixes=("github_copilot/",),
+        env_extras=(),
+        is_gateway=False,
+        is_local=False,
+        detect_by_key_prefix="",
+        detect_by_base_keyword="",
+        default_api_base="",
+        strip_model_prefix=False,
+        model_overrides=(),
+        is_oauth=True,  # OAuth-based authentication
+    ),
+
     # DeepSeek: needs "deepseek/" prefix for LiteLLM routing.
     ProviderSpec(
         name="deepseek",
@@ -326,10 +384,18 @@ def find_by_model(model: str) -> ProviderSpec | None:
     """Match a standard provider by model-name keyword (case-insensitive).
     Skips gateways/local — those are matched by api_key/api_base instead."""
     model_lower = model.lower()
-    for spec in PROVIDERS:
-        if spec.is_gateway or spec.is_local:
-            continue
-        if any(kw in model_lower for kw in spec.keywords):
-            return spec
+    model_normalized = model_lower.replace("-", "_")
+    model_prefix = model_lower.split("/", 1)[0] if "/" in model_lower else ""
+    normalized_prefix = model_prefix.replace("-", "_")
+    std_specs = [s for s in PROVIDERS if not s.is_gateway and not s.is_local]
+    # Prefer explicit provider prefix — prevents `github-copilot/...codex` matching openai_codex.
+    for spec in std_specs:
+        if model_prefix and normalized_prefix == spec.name:
+            return spec
+    for spec in std_specs:
+        if any(kw in model_lower or kw.replace("-", "_") in model_normalized for kw in spec.keywords):
+            return spec
     return None
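Why the prefix pass must run before the keyword pass: with a hypothetical two-entry registry in the same order as `PROVIDERS`, keyword containment alone would misroute Copilot's codex-named models (toy sketch; real specs carry many more fields):

```python
# name -> keywords, in registry order (openai_codex before github_copilot, as in PROVIDERS)
SPECS = (
    ("openai_codex", ("openai-codex", "codex")),
    ("github_copilot", ("github_copilot", "copilot")),
)


def find_by_model(model: str):
    model_lower = model.lower()
    model_prefix = model_lower.split("/", 1)[0] if "/" in model_lower else ""
    normalized_prefix = model_prefix.replace("-", "_")
    # Pass 1: an explicit provider prefix wins outright
    for name, _ in SPECS:
        if model_prefix and normalized_prefix == name:
            return name
    # Pass 2: keyword containment, in registry order
    for name, kws in SPECS:
        if any(kw in model_lower for kw in kws):
            return name
    return None
```

Without pass 1, `github-copilot/gpt-5-codex` would hit the `codex` keyword of `openai_codex` first and be routed to the wrong provider.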

View File

@@ -35,7 +35,7 @@ class GroqTranscriptionProvider:
         path = Path(file_path)
         if not path.exists():
-            logger.error(f"Audio file not found: {file_path}")
+            logger.error("Audio file not found: {}", file_path)
             return ""

         try:
@@ -61,5 +61,5 @@ class GroqTranscriptionProvider:
             return data.get("text", "")
         except Exception as e:
-            logger.error(f"Groq transcription error: {e}")
+            logger.error("Groq transcription error: {}", e)
             return ""

View File

@@ -42,8 +42,15 @@ class Session:
self.updated_at = datetime.now() self.updated_at = datetime.now()
def get_history(self, max_messages: int = 500) -> list[dict[str, Any]]: def get_history(self, max_messages: int = 500) -> list[dict[str, Any]]:
"""Get recent messages in LLM format (role + content only).""" """Get recent messages in LLM format, preserving tool metadata."""
return [{"role": m["role"], "content": m["content"]} for m in self.messages[-max_messages:]] out: list[dict[str, Any]] = []
for m in self.messages[-max_messages:]:
entry: dict[str, Any] = {"role": m["role"], "content": m.get("content", "")}
for k in ("tool_calls", "tool_call_id", "name"):
if k in m:
entry[k] = m[k]
out.append(entry)
return out
def clear(self) -> None: def clear(self) -> None:
"""Clear all messages and reset session to initial state.""" """Clear all messages and reset session to initial state."""
@@ -61,13 +68,19 @@ class SessionManager:
def __init__(self, workspace: Path): def __init__(self, workspace: Path):
self.workspace = workspace self.workspace = workspace
self.sessions_dir = ensure_dir(Path.home() / ".nanobot" / "sessions") self.sessions_dir = ensure_dir(self.workspace / "sessions")
self.legacy_sessions_dir = Path.home() / ".nanobot" / "sessions"
self._cache: dict[str, Session] = {} self._cache: dict[str, Session] = {}
def _get_session_path(self, key: str) -> Path: def _get_session_path(self, key: str) -> Path:
"""Get the file path for a session.""" """Get the file path for a session."""
safe_key = safe_filename(key.replace(":", "_")) safe_key = safe_filename(key.replace(":", "_"))
return self.sessions_dir / f"{safe_key}.jsonl" return self.sessions_dir / f"{safe_key}.jsonl"
def _get_legacy_session_path(self, key: str) -> Path:
"""Legacy global session path (~/.nanobot/sessions/)."""
safe_key = safe_filename(key.replace(":", "_"))
return self.legacy_sessions_dir / f"{safe_key}.jsonl"
def get_or_create(self, key: str) -> Session: def get_or_create(self, key: str) -> Session:
""" """
@@ -92,6 +105,12 @@ class SessionManager:
def _load(self, key: str) -> Session | None: def _load(self, key: str) -> Session | None:
"""Load a session from disk.""" """Load a session from disk."""
path = self._get_session_path(key) path = self._get_session_path(key)
if not path.exists():
legacy_path = self._get_legacy_session_path(key)
if legacy_path.exists():
import shutil
shutil.move(str(legacy_path), str(path))
logger.info("Migrated session {} from legacy path", key)
if not path.exists(): if not path.exists():
return None return None
@@ -102,7 +121,7 @@ class SessionManager:
created_at = None
last_consolidated = 0
-with open(path) as f:
+with open(path, encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if not line:
@@ -125,14 +144,14 @@ class SessionManager:
    last_consolidated=last_consolidated
)
except Exception as e:
-    logger.warning(f"Failed to load session {key}: {e}")
+    logger.warning("Failed to load session {}: {}", key, e)
    return None
def save(self, session: Session) -> None:
    """Save a session to disk."""
    path = self._get_session_path(session.key)
-   with open(path, "w") as f:
+   with open(path, "w", encoding="utf-8") as f:
        metadata_line = {
            "_type": "metadata",
            "created_at": session.created_at.isoformat(),
@@ -140,9 +159,9 @@ class SessionManager:
            "metadata": session.metadata,
            "last_consolidated": session.last_consolidated
        }
-       f.write(json.dumps(metadata_line) + "\n")
+       f.write(json.dumps(metadata_line, ensure_ascii=False) + "\n")
        for msg in session.messages:
-           f.write(json.dumps(msg) + "\n")
+           f.write(json.dumps(msg, ensure_ascii=False) + "\n")
    self._cache[session.key] = session
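The `ensure_ascii=False` additions keep non-ASCII message content human-readable on disk instead of `\uXXXX`-escaped; a quick illustration:

```python
import json

msg = {"role": "user", "content": "你好"}
print(json.dumps(msg))                      # escaped: {"role": "user", "content": "\u4f60\u597d"}
print(json.dumps(msg, ensure_ascii=False))  # readable: {"role": "user", "content": "你好"}
```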
@@ -162,7 +181,7 @@ class SessionManager:
for path in self.sessions_dir.glob("*.jsonl"):
    try:
        # Read just the metadata line
-       with open(path) as f:
+       with open(path, encoding="utf-8") as f:
            first_line = f.readline().strip()
            if first_line:
                data = json.loads(first_line)
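Reading only the first line of each file keeps session listing cheap regardless of transcript length; a condensed sketch of that scan (function name hypothetical):

```python
import json
from pathlib import Path

def list_session_metadata(sessions_dir: Path) -> list[dict]:
    """Collect the metadata line of every session file, skipping the messages."""
    out: list[dict] = []
    for path in sorted(sessions_dir.glob("*.jsonl")):
        with open(path, encoding="utf-8") as f:
            first_line = f.readline().strip()
        if first_line:
            data = json.loads(first_line)
            if data.get("_type") == "metadata":
                out.append({"key": path.stem, **data})
    return out
```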

View File

@@ -21,4 +21,5 @@ The skill format and metadata structure follow OpenClaw's conventions to maintai
| `weather` | Get weather info using wttr.in and Open-Meteo |
| `summarize` | Summarize URLs, files, and YouTube videos |
| `tmux` | Remote-control tmux sessions |
| `clawhub` | Search and install skills from ClawHub registry |
| `skill-creator` | Create new skills |

View File

@@ -0,0 +1,53 @@
---
name: clawhub
description: Search and install agent skills from ClawHub, the public skill registry.
homepage: https://clawhub.ai
metadata: {"nanobot":{"emoji":"🦞"}}
---
# ClawHub
Public skill registry for AI agents. Search by natural language (vector search).
## When to use
Use this skill when the user asks any of:
- "find a skill for …"
- "search for skills"
- "install a skill"
- "what skills are available?"
- "update my skills"
## Search
```bash
npx --yes clawhub@latest search "web scraping" --limit 5
```
## Install
```bash
npx --yes clawhub@latest install <slug> --workdir ~/.nanobot/workspace
```
Replace `<slug>` with the skill name from the search results. This installs the skill into `~/.nanobot/workspace/skills/`, the directory nanobot loads workspace skills from. Always include `--workdir`.
## Update
```bash
npx --yes clawhub@latest update --all --workdir ~/.nanobot/workspace
```
## List installed
```bash
npx --yes clawhub@latest list --workdir ~/.nanobot/workspace
```
## Notes
- Requires Node.js (`npx` comes with it).
- No API key needed for search and install.
- Login (`npx --yes clawhub@latest login`) is only required for publishing.
- `--workdir ~/.nanobot/workspace` is critical — without it, skills install to the current directory instead of the nanobot workspace.
- After install, remind the user to start a new session to load the skill.

View File

@@ -30,6 +30,11 @@ One-time scheduled task (compute ISO datetime from current time):
cron(action="add", message="Remind me about the meeting", at="<ISO datetime>")
```
Timezone-aware cron:
```
cron(action="add", message="Morning standup", cron_expr="0 9 * * 1-5", tz="America/Vancouver")
```
List/remove:
```
cron(action="list")
@@ -44,4 +49,9 @@ cron(action="remove", job_id="abc123")
| every hour | every_seconds: 3600 |
| every day at 8am | cron_expr: "0 8 * * *" |
| weekdays at 5pm | cron_expr: "0 17 * * 1-5" |
| 9am Vancouver time daily | cron_expr: "0 9 * * *", tz: "America/Vancouver" |
| at a specific time | at: ISO datetime string (compute from current time) |
## Timezone
Use `tz` with `cron_expr` to schedule in a specific IANA timezone. Without `tz`, the server's local timezone is used.

View File

@@ -1,6 +1,6 @@
[project]
name = "nanobot-ai"
-version = "0.1.3.post7"
+version = "0.1.4"
description = "A lightweight personal AI assistant framework"
requires-python = ">=3.11"
license = {text = "MIT"}
@@ -17,35 +17,37 @@ classifiers = [
]

dependencies = [
-    "typer>=0.9.0",
-    "litellm>=1.0.0",
-    "pydantic>=2.0.0",
-    "pydantic-settings>=2.0.0",
-    "websockets>=12.0",
-    "websocket-client>=1.6.0",
-    "httpx[socks]>=0.25.0",
-    "loguru>=0.7.0",
-    "readability-lxml>=0.8.0",
-    "rich>=13.0.0",
-    "croniter>=2.0.0",
-    "dingtalk-stream>=0.4.0",
-    "python-telegram-bot[socks]>=21.0",
-    "lark-oapi>=1.0.0",
-    "socksio>=1.0.0",
-    "python-socketio>=5.11.0",
-    "msgpack>=1.0.8",
-    "slack-sdk>=3.26.0",
-    "qq-botpy>=1.0.0",
-    "python-socks[asyncio]>=2.4.0",
-    "prompt-toolkit>=3.0.0",
-    "mcp>=1.0.0",
-    "json-repair>=0.30.0",
+    "typer>=0.20.0,<1.0.0",
+    "litellm>=1.81.5,<2.0.0",
+    "pydantic>=2.12.0,<3.0.0",
+    "pydantic-settings>=2.12.0,<3.0.0",
+    "websockets>=16.0,<17.0",
+    "websocket-client>=1.9.0,<2.0.0",
+    "httpx>=0.28.0,<1.0.0",
+    "oauth-cli-kit>=0.1.3,<1.0.0",
+    "loguru>=0.7.3,<1.0.0",
+    "readability-lxml>=0.8.4,<1.0.0",
+    "rich>=14.0.0,<15.0.0",
+    "croniter>=6.0.0,<7.0.0",
+    "dingtalk-stream>=0.24.0,<1.0.0",
+    "python-telegram-bot[socks]>=22.0,<23.0",
+    "lark-oapi>=1.5.0,<2.0.0",
+    "socksio>=1.0.0,<2.0.0",
+    "python-socketio>=5.16.0,<6.0.0",
+    "msgpack>=1.1.0,<2.0.0",
+    "slack-sdk>=3.39.0,<4.0.0",
+    "slackify-markdown>=0.2.0,<1.0.0",
+    "qq-botpy>=1.2.0,<2.0.0",
+    "python-socks[asyncio]>=2.8.0,<3.0.0",
+    "prompt-toolkit>=3.0.50,<4.0.0",
+    "mcp>=1.26.0,<2.0.0",
+    "json-repair>=0.57.0,<1.0.0",
]

[project.optional-dependencies]
dev = [
-    "pytest>=7.0.0",
+    "pytest>=9.0.0,<10.0.0",
-    "pytest-asyncio>=0.21.0",
+    "pytest-asyncio>=1.3.0,<2.0.0",
    "ruff>=0.1.0",
]

View File

@@ -6,6 +6,10 @@ import pytest
from typer.testing import CliRunner
from nanobot.cli.commands import app
from nanobot.config.schema import Config
from nanobot.providers.litellm_provider import LiteLLMProvider
from nanobot.providers.openai_codex_provider import _strip_model_prefix
from nanobot.providers.registry import find_by_model
runner = CliRunner()
@@ -90,3 +94,37 @@ def test_onboard_existing_workspace_safe_create(mock_paths):
assert "Created workspace" not in result.stdout
assert "Created AGENTS.md" in result.stdout
assert (workspace_dir / "AGENTS.md").exists()
def test_config_matches_github_copilot_codex_with_hyphen_prefix():
config = Config()
config.agents.defaults.model = "github-copilot/gpt-5.3-codex"
assert config.get_provider_name() == "github_copilot"
def test_config_matches_openai_codex_with_hyphen_prefix():
config = Config()
config.agents.defaults.model = "openai-codex/gpt-5.1-codex"
assert config.get_provider_name() == "openai_codex"
def test_find_by_model_prefers_explicit_prefix_over_generic_codex_keyword():
spec = find_by_model("github-copilot/gpt-5.3-codex")
assert spec is not None
assert spec.name == "github_copilot"
def test_litellm_provider_canonicalizes_github_copilot_hyphen_prefix():
provider = LiteLLMProvider(default_model="github-copilot/gpt-5.3-codex")
resolved = provider._resolve_model("github-copilot/gpt-5.3-codex")
assert resolved == "github_copilot/gpt-5.3-codex"
def test_openai_codex_strip_prefix_supports_hyphen_and_underscore():
assert _strip_model_prefix("openai-codex/gpt-5.1-codex") == "gpt-5.1-codex"
assert _strip_model_prefix("openai_codex/gpt-5.1-codex") == "gpt-5.1-codex"

View File

@@ -0,0 +1,29 @@
from typer.testing import CliRunner
from nanobot.cli.commands import app
runner = CliRunner()
def test_cron_add_rejects_invalid_timezone(monkeypatch, tmp_path) -> None:
monkeypatch.setattr("nanobot.config.loader.get_data_dir", lambda: tmp_path)
result = runner.invoke(
app,
[
"cron",
"add",
"--name",
"demo",
"--message",
"hello",
"--cron",
"0 9 * * *",
"--tz",
"America/Vancovuer",
],
)
assert result.exit_code == 1
assert "Error: unknown timezone 'America/Vancovuer'" in result.stdout
assert not (tmp_path / "cron" / "jobs.json").exists()

View File

@@ -0,0 +1,30 @@
import pytest
from nanobot.cron.service import CronService
from nanobot.cron.types import CronSchedule
def test_add_job_rejects_unknown_timezone(tmp_path) -> None:
service = CronService(tmp_path / "cron" / "jobs.json")
with pytest.raises(ValueError, match="unknown timezone 'America/Vancovuer'"):
service.add_job(
name="tz typo",
schedule=CronSchedule(kind="cron", expr="0 9 * * *", tz="America/Vancovuer"),
message="hello",
)
assert service.list_jobs(include_disabled=True) == []
def test_add_job_accepts_valid_timezone(tmp_path) -> None:
service = CronService(tmp_path / "cron" / "jobs.json")
job = service.add_job(
name="tz ok",
schedule=CronSchedule(kind="cron", expr="0 9 * * *", tz="America/Vancouver"),
message="hello",
)
assert job.schedule.tz == "America/Vancouver"
assert job.state.next_run_at_ms is not None
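A minimal validator consistent with these tests, assuming the check is built on the standard-library `zoneinfo` module (helper name hypothetical):

```python
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

def validate_timezone(tz: str) -> str:
    """Raise ValueError for anything that is not a known IANA timezone name."""
    try:
        ZoneInfo(tz)
    except (ZoneInfoNotFoundError, ValueError):
        raise ValueError(f"unknown timezone {tz!r}") from None
    return tz
```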