Merge branch 'main' into pr-807

This commit is contained in:
Re-bin
2026-02-20 08:47:13 +00:00
32 changed files with 629 additions and 314 deletions

View File

@@ -16,28 +16,28 @@
 ⚡️ Delivers core agent functionality in just **~4,000** lines of code — **99% smaller** than Clawdbot's 430k+ lines.
-📏 Real-time line count: **3,761 lines** (run `bash core_agent_lines.sh` to verify anytime)
+📏 Real-time line count: **3,793 lines** (run `bash core_agent_lines.sh` to verify anytime)
 ## 📢 News
-- **2026-02-17** 🎉 Released v0.1.4 — MCP support, progress streaming, new providers, and multiple channel improvements. Please see [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.4) for details.
+- **2026-02-17** 🎉 Released **v0.1.4** — MCP support, progress streaming, new providers, and multiple channel improvements. Please see [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.4) for details.
 - **2026-02-16** 🦞 nanobot now integrates a [ClawHub](https://clawhub.ai) skill — search and install public agent skills.
 - **2026-02-15** 🔑 nanobot now supports OpenAI Codex provider with OAuth login support.
 - **2026-02-14** 🔌 nanobot now supports MCP! See [MCP section](#mcp-model-context-protocol) for details.
-- **2026-02-13** 🎉 Released v0.1.3.post7 — includes security hardening and multiple improvements. All users are recommended to upgrade to the latest version. See [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post7) for more details.
+- **2026-02-13** 🎉 Released **v0.1.3.post7** — includes security hardening and multiple improvements. **Please upgrade to the latest version to address security issues**. See [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post7) for more details.
 - **2026-02-12** 🧠 Redesigned memory system — Less code, more reliable. Join the [discussion](https://github.com/HKUDS/nanobot/discussions/566) about it!
 - **2026-02-11** ✨ Enhanced CLI experience and added MiniMax support!
-- **2026-02-10** 🎉 Released v0.1.3.post6 with improvements! Check the updates [notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post6) and our [roadmap](https://github.com/HKUDS/nanobot/discussions/431).
+- **2026-02-10** 🎉 Released **v0.1.3.post6** with improvements! Check the updates [notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post6) and our [roadmap](https://github.com/HKUDS/nanobot/discussions/431).
 - **2026-02-09** 💬 Added Slack, Email, and QQ support — nanobot now supports multiple chat platforms!
 - **2026-02-08** 🔧 Refactored Providers—adding a new LLM provider now takes just 2 simple steps! Check [here](#providers).
 <details>
 <summary>Earlier news</summary>
-- **2026-02-07** 🚀 Released v0.1.3.post5 with Qwen support & several key improvements! Check [here](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post5) for details.
+- **2026-02-07** 🚀 Released **v0.1.3.post5** with Qwen support & several key improvements! Check [here](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post5) for details.
 - **2026-02-06** ✨ Added Moonshot/Kimi provider, Discord integration, and enhanced security hardening!
 - **2026-02-05** ✨ Added Feishu channel, DeepSeek provider, and enhanced scheduled tasks support!
-- **2026-02-04** 🚀 Released v0.1.3.post4 with multi-provider & Docker support! Check [here](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post4) for details.
+- **2026-02-04** 🚀 Released **v0.1.3.post4** with multi-provider & Docker support! Check [here](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post4) for details.
 - **2026-02-03** ⚡ Integrated vLLM for local LLM support and improved natural language task scheduling!
 - **2026-02-02** 🎉 nanobot officially launched! Welcome to try 🐈 nanobot!
@@ -578,6 +578,7 @@ Config file: `~/.nanobot/config.json`
 > - **Groq** provides free voice transcription via Whisper. If configured, Telegram voice messages will be automatically transcribed.
 > - **Zhipu Coding Plan**: If you're on Zhipu's coding plan, set `"apiBase": "https://open.bigmodel.cn/api/coding/paas/v4"` in your zhipu provider config.
 > - **MiniMax (Mainland China)**: If your API key is from MiniMax's mainland China platform (minimaxi.com), set `"apiBase": "https://api.minimaxi.com/v1"` in your minimax provider config.
+> - **VolcEngine Coding Plan**: If you're on VolcEngine's coding plan, set `"apiBase": "https://ark.cn-beijing.volces.com/api/coding/v3"` in your volcengine provider config.
 | Provider | Purpose | Get API Key |
 |----------|---------|-------------|
@@ -590,7 +591,8 @@ Config file: `~/.nanobot/config.json`
 | `gemini` | LLM (Gemini direct) | [aistudio.google.com](https://aistudio.google.com) |
 | `minimax` | LLM (MiniMax direct) | [platform.minimax.io](https://platform.minimax.io) |
 | `aihubmix` | LLM (API gateway, access to all models) | [aihubmix.com](https://aihubmix.com) |
-| `siliconflow` | LLM (SiliconFlow/硅基流动, API gateway) | [siliconflow.cn](https://siliconflow.cn) |
+| `siliconflow` | LLM (SiliconFlow/硅基流动) | [siliconflow.cn](https://siliconflow.cn) |
+| `volcengine` | LLM (VolcEngine/火山引擎) | [volcengine.com](https://www.volcengine.com) |
 | `dashscope` | LLM (Qwen) | [dashscope.console.aliyun.com](https://dashscope.console.aliyun.com) |
 | `moonshot` | LLM (Moonshot/Kimi) | [platform.moonshot.cn](https://platform.moonshot.cn) |
 | `zhipu` | LLM (Zhipu GLM) | [open.bigmodel.cn](https://open.bigmodel.cn) |

View File

@@ -89,16 +89,17 @@ class AgentLoop:
         self._mcp_servers = mcp_servers or {}
         self._mcp_stack: AsyncExitStack | None = None
         self._mcp_connected = False
+        self._consolidating: set[str] = set()  # Session keys with consolidation in progress
         self._register_default_tools()

     def _register_default_tools(self) -> None:
         """Register the default set of tools."""
-        # File tools (restrict to workspace if configured)
+        # File tools (workspace for relative paths, restrict if configured)
         allowed_dir = self.workspace if self.restrict_to_workspace else None
-        self.tools.register(ReadFileTool(allowed_dir=allowed_dir))
-        self.tools.register(WriteFileTool(allowed_dir=allowed_dir))
-        self.tools.register(EditFileTool(allowed_dir=allowed_dir))
-        self.tools.register(ListDirTool(allowed_dir=allowed_dir))
+        self.tools.register(ReadFileTool(workspace=self.workspace, allowed_dir=allowed_dir))
+        self.tools.register(WriteFileTool(workspace=self.workspace, allowed_dir=allowed_dir))
+        self.tools.register(EditFileTool(workspace=self.workspace, allowed_dir=allowed_dir))
+        self.tools.register(ListDirTool(workspace=self.workspace, allowed_dir=allowed_dir))
         # Shell tool
         self.tools.register(ExecTool(
@@ -183,6 +184,7 @@ class AgentLoop:
         iteration = 0
         final_content = None
         tools_used: list[str] = []
+        text_only_retried = False

         while iteration < self.max_iterations:
             iteration += 1
@@ -206,7 +208,7 @@ class AgentLoop:
                         "type": "function",
                         "function": {
                             "name": tc.name,
-                            "arguments": json.dumps(tc.arguments)
+                            "arguments": json.dumps(tc.arguments, ensure_ascii=False)
                         }
                     }
                     for tc in response.tool_calls
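This hunk (and several below) passes `ensure_ascii=False` to `json.dumps` so non-ASCII tool-call arguments are serialized as readable characters rather than `\uXXXX` escapes. A minimal sketch of the difference, using hypothetical argument values:

```python
import json

# Hypothetical tool-call arguments containing non-ASCII text
args = {"path": "笔记.txt", "content": "héllo"}

escaped = json.dumps(args)                       # default: \uXXXX escapes
readable = json.dumps(args, ensure_ascii=False)  # keeps the characters as-is
```

Both forms parse back to the same object; the change only affects what a model (or a log reader) sees.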
@@ -219,13 +221,24 @@ class AgentLoop:
                 for tool_call in response.tool_calls:
                     tools_used.append(tool_call.name)
                     args_str = json.dumps(tool_call.arguments, ensure_ascii=False)
-                    logger.info(f"Tool call: {tool_call.name}({args_str[:200]})")
+                    logger.info("Tool call: {}({})", tool_call.name, args_str[:200])
                     result = await self.tools.execute(tool_call.name, tool_call.arguments)
                     messages = self.context.add_tool_result(
                         messages, tool_call.id, tool_call.name, result
                     )
             else:
                 final_content = self._strip_think(response.content)
+                # Some models send an interim text response before tool calls.
+                # Give them one retry; don't forward the text to avoid duplicates.
+                if not tools_used and not text_only_retried and final_content:
+                    text_only_retried = True
+                    logger.debug("Interim text response (no tools used yet), retrying: {}", final_content[:80])
+                    messages = self.context.add_assistant_message(
+                        messages, response.content,
+                        reasoning_content=response.reasoning_content,
+                    )
+                    final_content = None
+                    continue
                 break

         return final_content, tools_used
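The retry logic above can be isolated into a small, self-contained sketch (the dict-based `responses` shape here is illustrative, not nanobot's actual types): a text-only response before any tool use gets exactly one retry instead of being returned immediately.

```python
def run_loop(responses, max_iterations=5):
    """Sketch of the one-shot retry for interim text-only responses."""
    text_only_retried = False
    tools_used = []
    for response in responses[:max_iterations]:
        if response["tool_calls"]:
            tools_used.extend(response["tool_calls"])
            continue
        final = response["content"]
        # First text-only response with no tools used yet: retry once,
        # and don't forward this interim text.
        if not tools_used and not text_only_retried and final:
            text_only_retried = True
            continue
        return final, tools_used
    return None, tools_used
```

A second consecutive text-only response is treated as the real answer, so a purely conversational turn still terminates after one extra iteration.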
@@ -247,7 +260,7 @@ class AgentLoop:
             if response:
                 await self.bus.publish_outbound(response)
         except Exception as e:
-            logger.error(f"Error processing message: {e}")
+            logger.error("Error processing message: {}", e)
             await self.bus.publish_outbound(OutboundMessage(
                 channel=msg.channel,
                 chat_id=msg.chat_id,
@@ -292,7 +305,7 @@ class AgentLoop:
             return await self._process_system_message(msg)
         preview = msg.content[:80] + "..." if len(msg.content) > 80 else msg.content
-        logger.info(f"Processing message from {msg.channel}:{msg.sender_id}: {preview}")
+        logger.info("Processing message from {}:{}: {}", msg.channel, msg.sender_id, preview)
         key = session_key or msg.session_key
         session = self.sessions.get_or_create(key)
@@ -318,8 +331,16 @@ class AgentLoop:
             return OutboundMessage(channel=msg.channel, chat_id=msg.chat_id,
                 content="🐈 nanobot commands:\n/new — Start a new conversation\n/help — Show available commands")
-        if len(session.messages) > self.memory_window:
-            asyncio.create_task(self._consolidate_memory(session))
+        if len(session.messages) > self.memory_window and session.key not in self._consolidating:
+            self._consolidating.add(session.key)
+            async def _consolidate_and_unlock():
+                try:
+                    await self._consolidate_memory(session)
+                finally:
+                    self._consolidating.discard(session.key)
+            asyncio.create_task(_consolidate_and_unlock())
         self._set_tool_context(msg.channel, msg.chat_id)
         initial_messages = self.context.build_messages(
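The `_consolidating` set guards against two consolidation tasks running for the same session at once, with `finally` guaranteeing the key is released even if consolidation fails. A standalone sketch of the pattern (class and method names here are illustrative, not nanobot's API):

```python
import asyncio

class Consolidator:
    """At most one background consolidation task per session key (sketch)."""

    def __init__(self) -> None:
        self._consolidating: set[str] = set()
        self.runs: list[str] = []

    def maybe_start(self, key: str) -> bool:
        if key in self._consolidating:
            return False                 # already consolidating this session
        self._consolidating.add(key)

        async def _consolidate_and_unlock() -> None:
            try:
                await asyncio.sleep(0)   # stand-in for the real consolidation
                self.runs.append(key)
            finally:
                self._consolidating.discard(key)  # release even on failure

        asyncio.create_task(_consolidate_and_unlock())
        return True

async def demo() -> tuple[list[bool], list[str]]:
    c = Consolidator()
    # Second start for "a" is rejected while its task is still pending.
    started = [c.maybe_start("a"), c.maybe_start("a"), c.maybe_start("b")]
    await asyncio.sleep(0.01)            # let the background tasks finish
    return started, c.runs
```

Because all mutation happens on one event loop, a plain `set` suffices; no `asyncio.Lock` is needed.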
@@ -344,7 +365,7 @@ class AgentLoop:
             final_content = "I've completed processing but have no response to give."
         preview = final_content[:120] + "..." if len(final_content) > 120 else final_content
-        logger.info(f"Response to {msg.channel}:{msg.sender_id}: {preview}")
+        logger.info("Response to {}:{}: {}", msg.channel, msg.sender_id, preview)
         session.add_message("user", msg.content)
         session.add_message("assistant", final_content,
@@ -365,7 +386,7 @@ class AgentLoop:
         The chat_id field contains "original_channel:original_chat_id" to route
         the response back to the correct destination.
         """
-        logger.info(f"Processing system message from {msg.sender_id}")
+        logger.info("Processing system message from {}", msg.sender_id)
         # Parse origin from chat_id (format: "channel:chat_id")
         if ":" in msg.chat_id:
@@ -413,22 +434,22 @@ class AgentLoop:
         if archive_all:
             old_messages = session.messages
             keep_count = 0
-            logger.info(f"Memory consolidation (archive_all): {len(session.messages)} total messages archived")
+            logger.info("Memory consolidation (archive_all): {} total messages archived", len(session.messages))
         else:
             keep_count = self.memory_window // 2
             if len(session.messages) <= keep_count:
-                logger.debug(f"Session {session.key}: No consolidation needed (messages={len(session.messages)}, keep={keep_count})")
+                logger.debug("Session {}: No consolidation needed (messages={}, keep={})", session.key, len(session.messages), keep_count)
                 return
             messages_to_process = len(session.messages) - session.last_consolidated
             if messages_to_process <= 0:
-                logger.debug(f"Session {session.key}: No new messages to consolidate (last_consolidated={session.last_consolidated}, total={len(session.messages)})")
+                logger.debug("Session {}: No new messages to consolidate (last_consolidated={}, total={})", session.key, session.last_consolidated, len(session.messages))
                 return
             old_messages = session.messages[session.last_consolidated:-keep_count]
             if not old_messages:
                 return
-        logger.info(f"Memory consolidation started: {len(session.messages)} total, {len(old_messages)} new to consolidate, {keep_count} keep")
+        logger.info("Memory consolidation started: {} total, {} new to consolidate, {} keep", len(session.messages), len(old_messages), keep_count)
         lines = []
         for m in old_messages:
@@ -451,6 +472,14 @@ class AgentLoop:
 ## Conversation to Process
 {conversation}
+**IMPORTANT**: Both values MUST be strings, not objects or arrays.
+Example:
+{{
+  "history_entry": "[2026-02-14 22:50] User asked about...",
+  "memory_update": "- Host: HARRYBOOK-T14P\n- Name: Nado"
+}}
 Respond with ONLY valid JSON, no markdown fences."""

         try:
@@ -469,12 +498,18 @@ Respond with ONLY valid JSON, no markdown fences."""
                 text = text.split("\n", 1)[-1].rsplit("```", 1)[0].strip()
             result = json_repair.loads(text)
             if not isinstance(result, dict):
-                logger.warning(f"Memory consolidation: unexpected response type, skipping. Response: {text[:200]}")
+                logger.warning("Memory consolidation: unexpected response type, skipping. Response: {}", text[:200])
                 return
             if entry := result.get("history_entry"):
+                # Defensive: ensure entry is a string (LLM may return dict)
+                if not isinstance(entry, str):
+                    entry = json.dumps(entry, ensure_ascii=False)
                 memory.append_history(entry)
             if update := result.get("memory_update"):
+                # Defensive: ensure update is a string
+                if not isinstance(update, str):
+                    update = json.dumps(update, ensure_ascii=False)
                 if update != current_memory:
                     memory.write_long_term(update)
@@ -482,9 +517,9 @@ Respond with ONLY valid JSON, no markdown fences."""
                 session.last_consolidated = 0
             else:
                 session.last_consolidated = len(session.messages) - keep_count
-            logger.info(f"Memory consolidation done: {len(session.messages)} messages, last_consolidated={session.last_consolidated}")
+            logger.info("Memory consolidation done: {} messages, last_consolidated={}", len(session.messages), session.last_consolidated)
         except Exception as e:
-            logger.error(f"Memory consolidation failed: {e}")
+            logger.error("Memory consolidation failed: {}", e)

     async def process_direct(
         self,

View File

@@ -86,7 +86,7 @@ class SubagentManager:
         # Cleanup when done
         bg_task.add_done_callback(lambda _: self._running_tasks.pop(task_id, None))
-        logger.info(f"Spawned subagent [{task_id}]: {display_label}")
+        logger.info("Spawned subagent [{}]: {}", task_id, display_label)
         return f"Subagent [{display_label}] started (id: {task_id}). I'll notify you when it completes."

     async def _run_subagent(
@@ -97,16 +97,16 @@ class SubagentManager:
         origin: dict[str, str],
     ) -> None:
         """Execute the subagent task and announce the result."""
-        logger.info(f"Subagent [{task_id}] starting task: {label}")
+        logger.info("Subagent [{}] starting task: {}", task_id, label)
         try:
             # Build subagent tools (no message tool, no spawn tool)
             tools = ToolRegistry()
             allowed_dir = self.workspace if self.restrict_to_workspace else None
-            tools.register(ReadFileTool(allowed_dir=allowed_dir))
-            tools.register(WriteFileTool(allowed_dir=allowed_dir))
-            tools.register(EditFileTool(allowed_dir=allowed_dir))
-            tools.register(ListDirTool(allowed_dir=allowed_dir))
+            tools.register(ReadFileTool(workspace=self.workspace, allowed_dir=allowed_dir))
+            tools.register(WriteFileTool(workspace=self.workspace, allowed_dir=allowed_dir))
+            tools.register(EditFileTool(workspace=self.workspace, allowed_dir=allowed_dir))
+            tools.register(ListDirTool(workspace=self.workspace, allowed_dir=allowed_dir))
             tools.register(ExecTool(
                 working_dir=str(self.workspace),
                 timeout=self.exec_config.timeout,
@@ -146,7 +146,7 @@ class SubagentManager:
                             "type": "function",
                             "function": {
                                 "name": tc.name,
-                                "arguments": json.dumps(tc.arguments),
+                                "arguments": json.dumps(tc.arguments, ensure_ascii=False),
                             },
                         }
                         for tc in response.tool_calls
@@ -159,8 +159,8 @@ class SubagentManager:
                 # Execute tools
                 for tool_call in response.tool_calls:
-                    args_str = json.dumps(tool_call.arguments)
-                    logger.debug(f"Subagent [{task_id}] executing: {tool_call.name} with arguments: {args_str}")
+                    args_str = json.dumps(tool_call.arguments, ensure_ascii=False)
+                    logger.debug("Subagent [{}] executing: {} with arguments: {}", task_id, tool_call.name, args_str)
                     result = await tools.execute(tool_call.name, tool_call.arguments)
                     messages.append({
                         "role": "tool",
@@ -175,12 +175,12 @@ class SubagentManager:
             if final_result is None:
                 final_result = "Task completed but no final response was generated."
-            logger.info(f"Subagent [{task_id}] completed successfully")
+            logger.info("Subagent [{}] completed successfully", task_id)
             await self._announce_result(task_id, label, task, final_result, origin, "ok")
         except Exception as e:
             error_msg = f"Error: {str(e)}"
-            logger.error(f"Subagent [{task_id}] failed: {e}")
+            logger.error("Subagent [{}] failed: {}", task_id, e)
             await self._announce_result(task_id, label, task, error_msg, origin, "error")

     async def _announce_result(
@@ -213,7 +213,7 @@ Summarize this naturally for the user. Keep it brief (1-2 sentences). Do not men
         )
         await self.bus.publish_inbound(msg)
-        logger.debug(f"Subagent [{task_id}] announced result to {origin['channel']}:{origin['chat_id']}")
+        logger.debug("Subagent [{}] announced result to {}:{}", task_id, origin['channel'], origin['chat_id'])

     def _build_subagent_prompt(self, task: str) -> str:
         """Build a focused system prompt for the subagent."""

View File

@@ -6,9 +6,12 @@ from typing import Any
 from nanobot.agent.tools.base import Tool

-def _resolve_path(path: str, allowed_dir: Path | None = None) -> Path:
-    """Resolve path and optionally enforce directory restriction."""
-    resolved = Path(path).expanduser().resolve()
+def _resolve_path(path: str, workspace: Path | None = None, allowed_dir: Path | None = None) -> Path:
+    """Resolve path against workspace (if relative) and enforce directory restriction."""
+    p = Path(path).expanduser()
+    if not p.is_absolute() and workspace:
+        p = workspace / p
+    resolved = p.resolve()
     if allowed_dir and not str(resolved).startswith(str(allowed_dir.resolve())):
         raise PermissionError(f"Path {path} is outside allowed directory {allowed_dir}")
     return resolved
@@ -17,7 +20,8 @@ def _resolve_path(path: str, allowed_dir: Path | None = None) -> Path:
 class ReadFileTool(Tool):
     """Tool to read file contents."""

-    def __init__(self, allowed_dir: Path | None = None):
+    def __init__(self, workspace: Path | None = None, allowed_dir: Path | None = None):
+        self._workspace = workspace
         self._allowed_dir = allowed_dir

     @property
@@ -43,7 +47,7 @@ class ReadFileTool(Tool):
     async def execute(self, path: str, **kwargs: Any) -> str:
         try:
-            file_path = _resolve_path(path, self._allowed_dir)
+            file_path = _resolve_path(path, self._workspace, self._allowed_dir)
             if not file_path.exists():
                 return f"Error: File not found: {path}"
             if not file_path.is_file():
@@ -60,7 +64,8 @@ class ReadFileTool(Tool):
 class WriteFileTool(Tool):
     """Tool to write content to a file."""

-    def __init__(self, allowed_dir: Path | None = None):
+    def __init__(self, workspace: Path | None = None, allowed_dir: Path | None = None):
+        self._workspace = workspace
         self._allowed_dir = allowed_dir

     @property
@@ -90,10 +95,10 @@ class WriteFileTool(Tool):
     async def execute(self, path: str, content: str, **kwargs: Any) -> str:
         try:
-            file_path = _resolve_path(path, self._allowed_dir)
+            file_path = _resolve_path(path, self._workspace, self._allowed_dir)
             file_path.parent.mkdir(parents=True, exist_ok=True)
             file_path.write_text(content, encoding="utf-8")
-            return f"Successfully wrote {len(content)} bytes to {path}"
+            return f"Successfully wrote {len(content)} bytes to {file_path}"
         except PermissionError as e:
             return f"Error: {e}"
         except Exception as e:
@@ -103,7 +108,8 @@ class WriteFileTool(Tool):
 class EditFileTool(Tool):
     """Tool to edit a file by replacing text."""

-    def __init__(self, allowed_dir: Path | None = None):
+    def __init__(self, workspace: Path | None = None, allowed_dir: Path | None = None):
+        self._workspace = workspace
         self._allowed_dir = allowed_dir

     @property
@@ -137,7 +143,7 @@ class EditFileTool(Tool):
     async def execute(self, path: str, old_text: str, new_text: str, **kwargs: Any) -> str:
         try:
-            file_path = _resolve_path(path, self._allowed_dir)
+            file_path = _resolve_path(path, self._workspace, self._allowed_dir)
             if not file_path.exists():
                 return f"Error: File not found: {path}"
@@ -154,7 +160,7 @@ class EditFileTool(Tool):
             new_content = content.replace(old_text, new_text, 1)
             file_path.write_text(new_content, encoding="utf-8")
-            return f"Successfully edited {path}"
+            return f"Successfully edited {file_path}"
         except PermissionError as e:
             return f"Error: {e}"
         except Exception as e:
@@ -164,7 +170,8 @@ class EditFileTool(Tool):
 class ListDirTool(Tool):
     """Tool to list directory contents."""

-    def __init__(self, allowed_dir: Path | None = None):
+    def __init__(self, workspace: Path | None = None, allowed_dir: Path | None = None):
+        self._workspace = workspace
         self._allowed_dir = allowed_dir

     @property
@@ -190,7 +197,7 @@ class ListDirTool(Tool):
     async def execute(self, path: str, **kwargs: Any) -> str:
         try:
-            dir_path = _resolve_path(path, self._allowed_dir)
+            dir_path = _resolve_path(path, self._workspace, self._allowed_dir)
             if not dir_path.exists():
                 return f"Error: Directory not found: {path}"
             if not dir_path.is_dir():

View File

@@ -75,7 +75,7 @@ async def connect_mcp_servers(
                     streamable_http_client(cfg.url)
                 )
             else:
-                logger.warning(f"MCP server '{name}': no command or url configured, skipping")
+                logger.warning("MCP server '{}': no command or url configured, skipping", name)
                 continue

             session = await stack.enter_async_context(ClientSession(read, write))
@@ -85,8 +85,8 @@ async def connect_mcp_servers(
             for tool_def in tools.tools:
                 wrapper = MCPToolWrapper(session, name, tool_def)
                 registry.register(wrapper)
-                logger.debug(f"MCP: registered tool '{wrapper.name}' from server '{name}'")
-            logger.info(f"MCP server '{name}': connected, {len(tools.tools)} tools registered")
+                logger.debug("MCP: registered tool '{}' from server '{}'", wrapper.name, name)
+            logger.info("MCP server '{}': connected, {} tools registered", name, len(tools.tools))
         except Exception as e:
-            logger.error(f"MCP server '{name}': failed to connect: {e}")
+            logger.error("MCP server '{}': failed to connect: {}", name, e)
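A pattern recurring throughout this commit is swapping f-string log messages for loguru's brace-style templates with positional args, which defers formatting until a record is actually emitted. This stand-in class (not loguru itself; names are illustrative) shows the idea without the dependency:

```python
class LazyLogger:
    """Illustrative stand-in: brace-style templates are only
    formatted when the record is actually emitted."""

    def __init__(self, enabled: bool = True) -> None:
        self.enabled = enabled
        self.records: list[str] = []

    def info(self, template: str, *args: object) -> None:
        if not self.enabled:
            return                       # args are never formatted: no wasted work
        self.records.append(template.format(*args))
```

With f-strings, the message is built even when the level is filtered out; with deferred formatting, a disabled sink pays nothing.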

View File

@@ -26,7 +26,8 @@ class ExecTool(Tool):
         r"\brm\s+-[rf]{1,2}\b",  # rm -r, rm -rf, rm -fr
         r"\bdel\s+/[fq]\b",  # del /f, del /q
         r"\brmdir\s+/s\b",  # rmdir /s
-        r"\b(format|mkfs|diskpart)\b",  # disk operations
+        r"(?:^|[;&|]\s*)format\b",  # format (as standalone command only)
+        r"\b(mkfs|diskpart)\b",  # disk operations
         r"\bdd\s+if=",  # dd
         r">\s*/dev/sd",  # write to disk
         r"\b(shutdown|reboot|poweroff)\b",  # system power
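The hunk above narrows the `format` guard: the old `\bformat\b` fired on any occurrence of the word (for example inside `git log --format=oneline`), while the new pattern only matches `format` as the command itself, at the start of the string or after a `;`, `&`, or `|` separator. A self-contained check using the patterns from the diff (the `is_dangerous` wrapper is illustrative):

```python
import re

# `format` only as a standalone command; mkfs/diskpart anywhere.
FORMAT_CMD = re.compile(r"(?:^|[;&|]\s*)format\b")
DISK_OPS = re.compile(r"\b(mkfs|diskpart)\b")

def is_dangerous(cmd: str) -> bool:
    return bool(FORMAT_CMD.search(cmd) or DISK_OPS.search(cmd))
```

The false-positive case the change fixes is exactly flags like `--format=…`, where `format` is an argument rather than a command.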
@@ -81,6 +82,12 @@ class ExecTool(Tool):
             )
         except asyncio.TimeoutError:
             process.kill()
+            # Wait for the process to fully terminate so pipes are
+            # drained and file descriptors are released.
+            try:
+                await asyncio.wait_for(process.wait(), timeout=5.0)
+            except asyncio.TimeoutError:
+                pass
             return f"Error: Command timed out after {self.timeout} seconds"

         output_parts = []
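Calling `kill()` without then awaiting `wait()` can leak pipe file descriptors and leave a zombie process; the bounded wait added above fixes that while guaranteeing the timeout path itself cannot hang. A runnable sketch of the whole pattern, assuming a POSIX shell (the function name is illustrative):

```python
import asyncio

async def run_with_timeout(cmd: str, timeout: float) -> str:
    process = await asyncio.create_subprocess_shell(
        cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    try:
        stdout, _ = await asyncio.wait_for(process.communicate(), timeout=timeout)
        return stdout.decode()
    except asyncio.TimeoutError:
        process.kill()
        # Reap the killed process so its pipes are drained and file
        # descriptors released; bound the wait so we never hang here.
        try:
            await asyncio.wait_for(process.wait(), timeout=5.0)
        except asyncio.TimeoutError:
            pass
        return f"Error: Command timed out after {timeout} seconds"
```

The inner 5-second cap matters on platforms where a killed child can linger (for example while an uninterruptible I/O operation completes).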

View File

@@ -116,7 +116,7 @@ class WebFetchTool(Tool):
         # Validate URL before fetching
         is_valid, error_msg = _validate_url(url)
         if not is_valid:
-            return json.dumps({"error": f"URL validation failed: {error_msg}", "url": url})
+            return json.dumps({"error": f"URL validation failed: {error_msg}", "url": url}, ensure_ascii=False)
         try:
             async with httpx.AsyncClient(
@@ -131,7 +131,7 @@ class WebFetchTool(Tool):
             # JSON
             if "application/json" in ctype:
-                text, extractor = json.dumps(r.json(), indent=2), "json"
+                text, extractor = json.dumps(r.json(), indent=2, ensure_ascii=False), "json"
             # HTML
             elif "text/html" in ctype or r.text[:256].lower().startswith(("<!doctype", "<html")):
                 doc = Document(r.text)
@@ -146,9 +146,9 @@ class WebFetchTool(Tool):
text = text[:max_chars] text = text[:max_chars]
return json.dumps({"url": url, "finalUrl": str(r.url), "status": r.status_code, return json.dumps({"url": url, "finalUrl": str(r.url), "status": r.status_code,
"extractor": extractor, "truncated": truncated, "length": len(text), "text": text}) "extractor": extractor, "truncated": truncated, "length": len(text), "text": text}, ensure_ascii=False)
except Exception as e: except Exception as e:
return json.dumps({"error": str(e), "url": url}) return json.dumps({"error": str(e), "url": url}, ensure_ascii=False)
def _to_markdown(self, html: str) -> str: def _to_markdown(self, html: str) -> str:
"""Convert HTML to markdown.""" """Convert HTML to markdown."""
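The recurring `ensure_ascii=False` change matters because `json.dumps` escapes every non-ASCII character to `\uXXXX` by default, which bloats payloads and makes CJK text unreadable in logs and chat messages:

```python
import json

payload = {"title": "你好, nanobot", "emoji": "🦞"}

escaped = json.dumps(payload)                       # default: \uXXXX escapes
readable = json.dumps(payload, ensure_ascii=False)  # raw UTF-8 passes through

print(escaped)
print(readable)
# Both round-trip to the same object; only the wire/log representation differs.
assert json.loads(escaped) == json.loads(readable) == payload
```

The readable form is also shorter: each CJK character costs one UTF-8 sequence instead of a six-character escape.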


@@ -1,9 +1,6 @@
 """Async message queue for decoupled channel-agent communication."""

 import asyncio
-from typing import Callable, Awaitable
-
-from loguru import logger

 from nanobot.bus.events import InboundMessage, OutboundMessage
@@ -19,8 +16,6 @@ class MessageBus:
     def __init__(self):
         self.inbound: asyncio.Queue[InboundMessage] = asyncio.Queue()
         self.outbound: asyncio.Queue[OutboundMessage] = asyncio.Queue()
-        self._outbound_subscribers: dict[str, list[Callable[[OutboundMessage], Awaitable[None]]]] = {}
-        self._running = False

     async def publish_inbound(self, msg: InboundMessage) -> None:
         """Publish a message from a channel to the agent."""
@@ -38,38 +33,6 @@ class MessageBus:
         """Consume the next outbound message (blocks until available)."""
         return await self.outbound.get()

-    def subscribe_outbound(
-        self,
-        channel: str,
-        callback: Callable[[OutboundMessage], Awaitable[None]]
-    ) -> None:
-        """Subscribe to outbound messages for a specific channel."""
-        if channel not in self._outbound_subscribers:
-            self._outbound_subscribers[channel] = []
-        self._outbound_subscribers[channel].append(callback)
-
-    async def dispatch_outbound(self) -> None:
-        """
-        Dispatch outbound messages to subscribed channels.
-        Run this as a background task.
-        """
-        self._running = True
-        while self._running:
-            try:
-                msg = await asyncio.wait_for(self.outbound.get(), timeout=1.0)
-                subscribers = self._outbound_subscribers.get(msg.channel, [])
-                for callback in subscribers:
-                    try:
-                        await callback(msg)
-                    except Exception as e:
-                        logger.error(f"Error dispatching to {msg.channel}: {e}")
-            except asyncio.TimeoutError:
-                continue
-
-    def stop(self) -> None:
-        """Stop the dispatcher loop."""
-        self._running = False
-
     @property
     def inbound_size(self) -> int:
         """Number of pending inbound messages."""
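After this change the bus is reduced to two plain queues; the subscriber registry and internal dispatch loop are removed. A minimal model of what remains (the dataclass stands in for `nanobot.bus.events`, and names beyond those in the diff are ours):

```python
import asyncio
from dataclasses import dataclass

@dataclass
class InboundMessage:  # stand-in for nanobot.bus.events.InboundMessage
    channel: str
    content: str

class MessageBus:
    """Two plain queues; no subscribers, no internal dispatch task."""
    def __init__(self) -> None:
        self.inbound: asyncio.Queue[InboundMessage] = asyncio.Queue()

    async def publish_inbound(self, msg: InboundMessage) -> None:
        await self.inbound.put(msg)

    async def consume_inbound(self) -> InboundMessage:
        return await self.inbound.get()

    @property
    def inbound_size(self) -> int:
        return self.inbound.qsize()

async def demo() -> str:
    bus = MessageBus()
    await bus.publish_inbound(InboundMessage("discord", "hello"))
    return (await bus.consume_inbound()).content

print(asyncio.run(demo()))  # hello
```

Routing of outbound messages now lives in one place (the channel manager's dispatch loop) instead of a per-channel callback registry.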


@@ -65,7 +65,7 @@ class NanobotDingTalkHandler(CallbackHandler):
         sender_id = chatbot_msg.sender_staff_id or chatbot_msg.sender_id
         sender_name = chatbot_msg.sender_nick or "Unknown"
-        logger.info(f"Received DingTalk message from {sender_name} ({sender_id}): {content}")
+        logger.info("Received DingTalk message from {} ({}): {}", sender_name, sender_id, content)

         # Forward to Nanobot via _on_message (non-blocking).
         # Store reference to prevent GC before task completes.
@@ -78,7 +78,7 @@ class NanobotDingTalkHandler(CallbackHandler):
             return AckMessage.STATUS_OK, "OK"
         except Exception as e:
-            logger.error(f"Error processing DingTalk message: {e}")
+            logger.error("Error processing DingTalk message: {}", e)
             # Return OK to avoid retry loop from DingTalk server
             return AckMessage.STATUS_OK, "Error"
@@ -142,13 +142,13 @@ class DingTalkChannel(BaseChannel):
                 try:
                     await self._client.start()
                 except Exception as e:
-                    logger.warning(f"DingTalk stream error: {e}")
+                    logger.warning("DingTalk stream error: {}", e)
                     if self._running:
                         logger.info("Reconnecting DingTalk stream in 5 seconds...")
                         await asyncio.sleep(5)
         except Exception as e:
-            logger.exception(f"Failed to start DingTalk channel: {e}")
+            logger.exception("Failed to start DingTalk channel: {}", e)

     async def stop(self) -> None:
         """Stop the DingTalk bot."""
@@ -186,7 +186,7 @@ class DingTalkChannel(BaseChannel):
             self._token_expiry = time.time() + int(res_data.get("expireIn", 7200)) - 60
             return self._access_token
         except Exception as e:
-            logger.error(f"Failed to get DingTalk access token: {e}")
+            logger.error("Failed to get DingTalk access token: {}", e)
             return None

     async def send(self, msg: OutboundMessage) -> None:
@@ -208,7 +208,7 @@ class DingTalkChannel(BaseChannel):
             "msgParam": json.dumps({
                 "text": msg.content,
                 "title": "Nanobot Reply",
-            }),
+            }, ensure_ascii=False),
         }

         if not self._http:
@@ -218,11 +218,11 @@ class DingTalkChannel(BaseChannel):
         try:
             resp = await self._http.post(url, json=data, headers=headers)
             if resp.status_code != 200:
-                logger.error(f"DingTalk send failed: {resp.text}")
+                logger.error("DingTalk send failed: {}", resp.text)
             else:
-                logger.debug(f"DingTalk message sent to {msg.chat_id}")
+                logger.debug("DingTalk message sent to {}", msg.chat_id)
         except Exception as e:
-            logger.error(f"Error sending DingTalk message: {e}")
+            logger.error("Error sending DingTalk message: {}", e)

     async def _on_message(self, content: str, sender_id: str, sender_name: str) -> None:
         """Handle incoming message (called by NanobotDingTalkHandler).
@@ -231,7 +231,7 @@ class DingTalkChannel(BaseChannel):
         permission checks before publishing to the bus.
         """
         try:
-            logger.info(f"DingTalk inbound: {content} from {sender_name}")
+            logger.info("DingTalk inbound: {} from {}", content, sender_name)
             await self._handle_message(
                 sender_id=sender_id,
                 chat_id=sender_id,  # For private chat, chat_id == sender_id
@@ -242,4 +242,4 @@ class DingTalkChannel(BaseChannel):
                 },
             )
         except Exception as e:
-            logger.error(f"Error publishing DingTalk message: {e}")
+            logger.error("Error publishing DingTalk message: {}", e)


@@ -51,7 +51,7 @@ class DiscordChannel(BaseChannel):
             except asyncio.CancelledError:
                 break
             except Exception as e:
-                logger.warning(f"Discord gateway error: {e}")
+                logger.warning("Discord gateway error: {}", e)
                 if self._running:
                     logger.info("Reconnecting to Discord gateway in 5 seconds...")
                     await asyncio.sleep(5)
@@ -94,14 +94,14 @@ class DiscordChannel(BaseChannel):
                 if response.status_code == 429:
                     data = response.json()
                     retry_after = float(data.get("retry_after", 1.0))
-                    logger.warning(f"Discord rate limited, retrying in {retry_after}s")
+                    logger.warning("Discord rate limited, retrying in {}s", retry_after)
                     await asyncio.sleep(retry_after)
                     continue
                 response.raise_for_status()
                 return
             except Exception as e:
                 if attempt == 2:
-                    logger.error(f"Error sending Discord message: {e}")
+                    logger.error("Error sending Discord message: {}", e)
                 else:
                     await asyncio.sleep(1)
             finally:
@@ -116,7 +116,7 @@ class DiscordChannel(BaseChannel):
             try:
                 data = json.loads(raw)
             except json.JSONDecodeError:
-                logger.warning(f"Invalid JSON from Discord gateway: {raw[:100]}")
+                logger.warning("Invalid JSON from Discord gateway: {}", raw[:100])
                 continue

             op = data.get("op")
@@ -175,7 +175,7 @@ class DiscordChannel(BaseChannel):
             try:
                 await self._ws.send(json.dumps(payload))
             except Exception as e:
-                logger.warning(f"Discord heartbeat failed: {e}")
+                logger.warning("Discord heartbeat failed: {}", e)
                 break
             await asyncio.sleep(interval_s)
@@ -219,7 +219,7 @@ class DiscordChannel(BaseChannel):
                     media_paths.append(str(file_path))
                     content_parts.append(f"[attachment: {file_path}]")
                 except Exception as e:
-                    logger.warning(f"Failed to download Discord attachment: {e}")
+                    logger.warning("Failed to download Discord attachment: {}", e)
                     content_parts.append(f"[attachment: (unknown) - download failed]")

         reply_to = (payload.get("referenced_message") or {}).get("id")
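The rate-limit branch in the send loop above can be isolated into a small sketch; `send` stands in for the real HTTP POST and the helper name is ours:

```python
import asyncio

async def post_with_retries(send) -> bool:
    """Retry loop mirroring the diff: sleep for Discord's retry_after on a
    429 response, pause 1s between other failures, give up after 3 attempts."""
    for attempt in range(3):
        status, body = await send()
        if status == 429:
            # Discord tells clients exactly how long to back off.
            retry_after = float(body.get("retry_after", 1.0))
            await asyncio.sleep(retry_after)
            continue
        if status < 400:
            return True
        if attempt == 2:
            return False  # final attempt failed; the channel logs the error here
        await asyncio.sleep(1)
    return False

# One rate-limit response followed by a success.
responses = [(429, {"retry_after": 0.01}), (200, {})]
async def fake_send():
    return responses.pop(0)

print(asyncio.run(post_with_retries(fake_send)))  # True
```

Honoring `retry_after` instead of a fixed backoff is what keeps the bot from being banned for hammering a rate-limited route.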


@@ -94,7 +94,7 @@ class EmailChannel(BaseChannel):
                         metadata=item.get("metadata", {}),
                     )
             except Exception as e:
-                logger.error(f"Email polling error: {e}")
+                logger.error("Email polling error: {}", e)

             await asyncio.sleep(poll_seconds)
@@ -143,7 +143,7 @@ class EmailChannel(BaseChannel):
         try:
             await asyncio.to_thread(self._smtp_send, email_msg)
         except Exception as e:
-            logger.error(f"Error sending email to {to_addr}: {e}")
+            logger.error("Error sending email to {}: {}", to_addr, e)
             raise

     def _validate_config(self) -> bool:
@@ -162,7 +162,7 @@ class EmailChannel(BaseChannel):
             missing.append("smtp_password")

         if missing:
-            logger.error(f"Email channel not configured, missing: {', '.join(missing)}")
+            logger.error("Email channel not configured, missing: {}", ', '.join(missing))
             return False
         return True


@@ -2,6 +2,7 @@
 import asyncio
 import json
+import os
 import re
 import threading
 from collections import OrderedDict
@@ -17,6 +18,10 @@ from nanobot.config.schema import FeishuConfig
 try:
     import lark_oapi as lark
     from lark_oapi.api.im.v1 import (
+        CreateFileRequest,
+        CreateFileRequestBody,
+        CreateImageRequest,
+        CreateImageRequestBody,
         CreateMessageRequest,
         CreateMessageRequestBody,
         CreateMessageReactionRequest,
@@ -151,7 +156,7 @@ class FeishuChannel(BaseChannel):
             try:
                 self._ws_client.start()
             except Exception as e:
-                logger.warning(f"Feishu WebSocket error: {e}")
+                logger.warning("Feishu WebSocket error: {}", e)
                 if self._running:
                     import time; time.sleep(5)
@@ -172,7 +177,7 @@ class FeishuChannel(BaseChannel):
             try:
                 self._ws_client.stop()
             except Exception as e:
-                logger.warning(f"Error stopping WebSocket client: {e}")
+                logger.warning("Error stopping WebSocket client: {}", e)
         logger.info("Feishu bot stopped")

     def _add_reaction_sync(self, message_id: str, emoji_type: str) -> None:
@@ -189,11 +194,11 @@ class FeishuChannel(BaseChannel):
             response = self._client.im.v1.message_reaction.create(request)
             if not response.success():
-                logger.warning(f"Failed to add reaction: code={response.code}, msg={response.msg}")
+                logger.warning("Failed to add reaction: code={}, msg={}", response.code, response.msg)
             else:
-                logger.debug(f"Added {emoji_type} reaction to message {message_id}")
+                logger.debug("Added {} reaction to message {}", emoji_type, message_id)
         except Exception as e:
-            logger.warning(f"Error adding reaction: {e}")
+            logger.warning("Error adding reaction: {}", e)

     async def _add_reaction(self, message_id: str, emoji_type: str = "THUMBSUP") -> None:
         """
@@ -263,7 +268,6 @@ class FeishuChannel(BaseChannel):
             before = protected[last_end:m.start()].strip()
             if before:
                 elements.append({"tag": "markdown", "content": before})
-            level = len(m.group(1))
             text = m.group(2).strip()
             elements.append({
                 "tag": "div",
@@ -284,50 +288,128 @@ class FeishuChannel(BaseChannel):
         return elements or [{"tag": "markdown", "content": content}]

+    _IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".gif", ".bmp", ".webp", ".ico", ".tiff", ".tif"}
+    _AUDIO_EXTS = {".opus"}
+    _FILE_TYPE_MAP = {
+        ".opus": "opus", ".mp4": "mp4", ".pdf": "pdf", ".doc": "doc", ".docx": "doc",
+        ".xls": "xls", ".xlsx": "xls", ".ppt": "ppt", ".pptx": "ppt",
+    }
+
+    def _upload_image_sync(self, file_path: str) -> str | None:
+        """Upload an image to Feishu and return the image_key."""
+        try:
+            with open(file_path, "rb") as f:
+                request = CreateImageRequest.builder() \
+                    .request_body(
+                        CreateImageRequestBody.builder()
+                        .image_type("message")
+                        .image(f)
+                        .build()
+                    ).build()
+                response = self._client.im.v1.image.create(request)
+                if response.success():
+                    image_key = response.data.image_key
+                    logger.debug("Uploaded image {}: {}", os.path.basename(file_path), image_key)
+                    return image_key
+                else:
+                    logger.error("Failed to upload image: code={}, msg={}", response.code, response.msg)
+                    return None
+        except Exception as e:
+            logger.error("Error uploading image {}: {}", file_path, e)
+            return None
+
+    def _upload_file_sync(self, file_path: str) -> str | None:
+        """Upload a file to Feishu and return the file_key."""
+        ext = os.path.splitext(file_path)[1].lower()
+        file_type = self._FILE_TYPE_MAP.get(ext, "stream")
+        file_name = os.path.basename(file_path)
+        try:
+            with open(file_path, "rb") as f:
+                request = CreateFileRequest.builder() \
+                    .request_body(
+                        CreateFileRequestBody.builder()
+                        .file_type(file_type)
+                        .file_name(file_name)
+                        .file(f)
+                        .build()
+                    ).build()
+                response = self._client.im.v1.file.create(request)
+                if response.success():
+                    file_key = response.data.file_key
+                    logger.debug("Uploaded file {}: {}", file_name, file_key)
+                    return file_key
+                else:
+                    logger.error("Failed to upload file: code={}, msg={}", response.code, response.msg)
+                    return None
+        except Exception as e:
+            logger.error("Error uploading file {}: {}", file_path, e)
+            return None
+
+    def _send_message_sync(self, receive_id_type: str, receive_id: str, msg_type: str, content: str) -> bool:
+        """Send a single message (text/image/file/interactive) synchronously."""
+        try:
+            request = CreateMessageRequest.builder() \
+                .receive_id_type(receive_id_type) \
+                .request_body(
+                    CreateMessageRequestBody.builder()
+                    .receive_id(receive_id)
+                    .msg_type(msg_type)
+                    .content(content)
+                    .build()
+                ).build()
+            response = self._client.im.v1.message.create(request)
+            if not response.success():
+                logger.error(
+                    "Failed to send Feishu {} message: code={}, msg={}, log_id={}",
+                    msg_type, response.code, response.msg, response.get_log_id()
+                )
+                return False
+            logger.debug("Feishu {} message sent to {}", msg_type, receive_id)
+            return True
+        except Exception as e:
+            logger.error("Error sending Feishu {} message: {}", msg_type, e)
+            return False
+
     async def send(self, msg: OutboundMessage) -> None:
-        """Send a message through Feishu."""
+        """Send a message through Feishu, including media (images/files) if present."""
         if not self._client:
             logger.warning("Feishu client not initialized")
             return

         try:
-            # Determine receive_id_type based on chat_id format
-            # open_id starts with "ou_", chat_id starts with "oc_"
-            if msg.chat_id.startswith("oc_"):
-                receive_id_type = "chat_id"
-            else:
-                receive_id_type = "open_id"
+            receive_id_type = "chat_id" if msg.chat_id.startswith("oc_") else "open_id"
+            loop = asyncio.get_running_loop()

-            # Build card with markdown + table support
-            elements = self._build_card_elements(msg.content)
-            card = {
-                "config": {"wide_screen_mode": True},
-                "elements": elements,
-            }
-            content = json.dumps(card, ensure_ascii=False)
+            for file_path in msg.media:
+                if not os.path.isfile(file_path):
+                    logger.warning("Media file not found: {}", file_path)
+                    continue
+                ext = os.path.splitext(file_path)[1].lower()
+                if ext in self._IMAGE_EXTS:
+                    key = await loop.run_in_executor(None, self._upload_image_sync, file_path)
+                    if key:
+                        await loop.run_in_executor(
+                            None, self._send_message_sync,
+                            receive_id_type, msg.chat_id, "image", json.dumps({"image_key": key}, ensure_ascii=False),
+                        )
+                else:
+                    key = await loop.run_in_executor(None, self._upload_file_sync, file_path)
+                    if key:
+                        media_type = "audio" if ext in self._AUDIO_EXTS else "file"
+                        await loop.run_in_executor(
+                            None, self._send_message_sync,
+                            receive_id_type, msg.chat_id, media_type, json.dumps({"file_key": key}, ensure_ascii=False),
+                        )

-            request = CreateMessageRequest.builder() \
-                .receive_id_type(receive_id_type) \
-                .request_body(
-                    CreateMessageRequestBody.builder()
-                    .receive_id(msg.chat_id)
-                    .msg_type("interactive")
-                    .content(content)
-                    .build()
-                ).build()
-            response = self._client.im.v1.message.create(request)
-            if not response.success():
-                logger.error(
-                    f"Failed to send Feishu message: code={response.code}, "
-                    f"msg={response.msg}, log_id={response.get_log_id()}"
-                )
-            else:
-                logger.debug(f"Feishu message sent to {msg.chat_id}")
+            if msg.content and msg.content.strip():
+                card = {"config": {"wide_screen_mode": True}, "elements": self._build_card_elements(msg.content)}
+                await loop.run_in_executor(
+                    None, self._send_message_sync,
+                    receive_id_type, msg.chat_id, "interactive", json.dumps(card, ensure_ascii=False),
+                )
         except Exception as e:
-            logger.error(f"Error sending Feishu message: {e}")
+            logger.error("Error sending Feishu message: {}", e)

     def _on_message_sync(self, data: "P2ImMessageReceiveV1") -> None:
         """
@@ -399,4 +481,4 @@ class FeishuChannel(BaseChannel):
             )
         except Exception as e:
-            logger.error(f"Error processing Feishu message: {e}")
+            logger.error("Error processing Feishu message: {}", e)
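The new `send()` picks a Feishu message type from each attachment's file extension. The decision table, extracted as a plain function for illustration (the function name is ours; the extension sets come from the diff):

```python
import os

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".gif", ".bmp", ".webp", ".ico", ".tiff", ".tif"}
AUDIO_EXTS = {".opus"}

def classify_media(path: str) -> str:
    """Image extensions -> 'image', .opus -> 'audio', everything else -> 'file'."""
    ext = os.path.splitext(path)[1].lower()
    if ext in IMAGE_EXTS:
        return "image"
    return "audio" if ext in AUDIO_EXTS else "file"

for p in ("chart.PNG", "note.opus", "report.pdf", "archive"):
    print(p, "->", classify_media(p))
```

Note the `.lower()` call: uploads with upper-case extensions still classify correctly, and an extension-less path falls through to the generic `file` type.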


@@ -45,7 +45,7 @@ class ChannelManager:
                 )
                 logger.info("Telegram channel enabled")
             except ImportError as e:
-                logger.warning(f"Telegram channel not available: {e}")
+                logger.warning("Telegram channel not available: {}", e)

         # WhatsApp channel
         if self.config.channels.whatsapp.enabled:
@@ -56,7 +56,7 @@ class ChannelManager:
                 )
                 logger.info("WhatsApp channel enabled")
             except ImportError as e:
-                logger.warning(f"WhatsApp channel not available: {e}")
+                logger.warning("WhatsApp channel not available: {}", e)

         # Discord channel
         if self.config.channels.discord.enabled:
@@ -67,7 +67,7 @@ class ChannelManager:
                 )
                 logger.info("Discord channel enabled")
             except ImportError as e:
-                logger.warning(f"Discord channel not available: {e}")
+                logger.warning("Discord channel not available: {}", e)

         # Feishu channel
         if self.config.channels.feishu.enabled:
@@ -78,7 +78,7 @@ class ChannelManager:
                 )
                 logger.info("Feishu channel enabled")
             except ImportError as e:
-                logger.warning(f"Feishu channel not available: {e}")
+                logger.warning("Feishu channel not available: {}", e)

         # Mochat channel
         if self.config.channels.mochat.enabled:
@@ -90,7 +90,7 @@ class ChannelManager:
                 )
                 logger.info("Mochat channel enabled")
             except ImportError as e:
-                logger.warning(f"Mochat channel not available: {e}")
+                logger.warning("Mochat channel not available: {}", e)

         # DingTalk channel
         if self.config.channels.dingtalk.enabled:
@@ -101,7 +101,7 @@ class ChannelManager:
                 )
                 logger.info("DingTalk channel enabled")
             except ImportError as e:
-                logger.warning(f"DingTalk channel not available: {e}")
+                logger.warning("DingTalk channel not available: {}", e)

         # Email channel
         if self.config.channels.email.enabled:
@@ -112,7 +112,7 @@ class ChannelManager:
                 )
                 logger.info("Email channel enabled")
             except ImportError as e:
-                logger.warning(f"Email channel not available: {e}")
+                logger.warning("Email channel not available: {}", e)

         # Slack channel
         if self.config.channels.slack.enabled:
@@ -123,7 +123,7 @@ class ChannelManager:
                 )
                 logger.info("Slack channel enabled")
             except ImportError as e:
-                logger.warning(f"Slack channel not available: {e}")
+                logger.warning("Slack channel not available: {}", e)

         # QQ channel
         if self.config.channels.qq.enabled:
@@ -135,14 +135,14 @@ class ChannelManager:
                 )
                 logger.info("QQ channel enabled")
             except ImportError as e:
-                logger.warning(f"QQ channel not available: {e}")
+                logger.warning("QQ channel not available: {}", e)

     async def _start_channel(self, name: str, channel: BaseChannel) -> None:
         """Start a channel and log any exceptions."""
         try:
             await channel.start()
         except Exception as e:
-            logger.error(f"Failed to start channel {name}: {e}")
+            logger.error("Failed to start channel {}: {}", name, e)

     async def start_all(self) -> None:
         """Start all channels and the outbound dispatcher."""
@@ -156,7 +156,7 @@ class ChannelManager:
         # Start channels
         tasks = []
         for name, channel in self.channels.items():
-            logger.info(f"Starting {name} channel...")
+            logger.info("Starting {} channel...", name)
             tasks.append(asyncio.create_task(self._start_channel(name, channel)))

         # Wait for all to complete (they should run forever)
@@ -178,9 +178,9 @@ class ChannelManager:
         for name, channel in self.channels.items():
             try:
                 await channel.stop()
-                logger.info(f"Stopped {name} channel")
+                logger.info("Stopped {} channel", name)
             except Exception as e:
-                logger.error(f"Error stopping {name}: {e}")
+                logger.error("Error stopping {}: {}", name, e)

     async def _dispatch_outbound(self) -> None:
         """Dispatch outbound messages to the appropriate channel."""
@@ -198,9 +198,9 @@ class ChannelManager:
                     try:
                         await channel.send(msg)
                     except Exception as e:
-                        logger.error(f"Error sending to {msg.channel}: {e}")
+                        logger.error("Error sending to {}: {}", msg.channel, e)
                 else:
-                    logger.warning(f"Unknown channel: {msg.channel}")
+                    logger.warning("Unknown channel: {}", msg.channel)
             except asyncio.TimeoutError:
                 continue
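The manager's outbound dispatcher polls the queue with `wait_for(..., timeout=1.0)` so an idle queue cannot block shutdown forever. A self-contained sketch under assumed names (the real loop lives on `ChannelManager` and sends via channel objects):

```python
import asyncio

async def dispatch_outbound(queue, channels, stop) -> None:
    """Poll the queue with a 1s timeout so the stop flag is rechecked
    even when no messages arrive (mirrors the manager's loop shape)."""
    while not stop.is_set():
        try:
            msg = await asyncio.wait_for(queue.get(), timeout=1.0)
        except asyncio.TimeoutError:
            continue  # nothing queued; loop back and recheck the stop flag
        channel = channels.get(msg["channel"])
        if channel is None:
            print(f"Unknown channel: {msg['channel']}")
            continue
        try:
            await channel(msg)
        except Exception as e:
            print(f"Error sending to {msg['channel']}: {e}")

async def demo() -> list:
    queue, stop, sent = asyncio.Queue(), asyncio.Event(), []
    async def echo(m):
        sent.append(m["content"])
    task = asyncio.create_task(dispatch_outbound(queue, {"echo": echo}, stop))
    await queue.put({"channel": "echo", "content": "hi"})
    await asyncio.sleep(0.05)
    stop.set()
    await task  # returns within ~1s: the next wait_for timeout sees the flag
    return sent

print(asyncio.run(demo()))  # ['hi']
```

A bare `await queue.get()` would park the coroutine indefinitely on an empty queue; the 1-second timeout bounds how long shutdown can be delayed.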


@@ -322,7 +322,7 @@ class MochatChannel(BaseChannel):
                 await self._api_send("/api/claw/sessions/send", "sessionId", target.id,
                                      content, msg.reply_to)
         except Exception as e:
-            logger.error(f"Failed to send Mochat message: {e}")
+            logger.error("Failed to send Mochat message: {}", e)

     # ---- config / init helpers ---------------------------------------------
@@ -380,7 +380,7 @@ class MochatChannel(BaseChannel):
         @client.event
         async def connect_error(data: Any) -> None:
-            logger.error(f"Mochat websocket connect error: {data}")
+            logger.error("Mochat websocket connect error: {}", data)

         @client.on("claw.session.events")
         async def on_session_events(payload: dict[str, Any]) -> None:
@@ -407,7 +407,7 @@ class MochatChannel(BaseChannel):
             )
             return True
         except Exception as e:
-            logger.error(f"Failed to connect Mochat websocket: {e}")
+            logger.error("Failed to connect Mochat websocket: {}", e)
             try:
                 await client.disconnect()
             except Exception:
@@ -444,7 +444,7 @@ class MochatChannel(BaseChannel):
             "limit": self.config.watch_limit,
         })
         if not ack.get("result"):
-            logger.error(f"Mochat subscribeSessions failed: {ack.get('message', 'unknown error')}")
+            logger.error("Mochat subscribeSessions failed: {}", ack.get('message', 'unknown error'))
             return False

         data = ack.get("data")
@@ -466,7 +466,7 @@ class MochatChannel(BaseChannel):
             return True
         ack = await self._socket_call("com.claw.im.subscribePanels", {"panelIds": panel_ids})
         if not ack.get("result"):
-            logger.error(f"Mochat subscribePanels failed: {ack.get('message', 'unknown error')}")
+            logger.error("Mochat subscribePanels failed: {}", ack.get('message', 'unknown error'))
             return False
         return True
@@ -488,7 +488,7 @@ class MochatChannel(BaseChannel):
             try:
                 await self._refresh_targets(subscribe_new=self._ws_ready)
             except Exception as e:
-                logger.warning(f"Mochat refresh failed: {e}")
+                logger.warning("Mochat refresh failed: {}", e)

             if self._fallback_mode:
                 await self._ensure_fallback_workers()
@@ -502,7 +502,7 @@ class MochatChannel(BaseChannel):
         try:
             response = await self._post_json("/api/claw/sessions/list", {})
         except Exception as e:
-            logger.warning(f"Mochat listSessions failed: {e}")
+            logger.warning("Mochat listSessions failed: {}", e)
             return

         sessions = response.get("sessions")
@@ -536,7 +536,7 @@ class MochatChannel(BaseChannel):
         try:
             response = await self._post_json("/api/claw/groups/get", {})
         except Exception as e:
-            logger.warning(f"Mochat getWorkspaceGroup failed: {e}")
+            logger.warning("Mochat getWorkspaceGroup failed: {}", e)
             return

         raw_panels = response.get("panels")
@@ -598,7 +598,7 @@ class MochatChannel(BaseChannel):
             except asyncio.CancelledError:
                 break
             except Exception as e:
-                logger.warning(f"Mochat watch fallback error ({session_id}): {e}")
+                logger.warning("Mochat watch fallback error ({}): {}", session_id, e)
                 await asyncio.sleep(max(0.1, self.config.retry_delay_ms / 1000.0))

     async def _panel_poll_worker(self, panel_id: str) -> None:
@@ -625,7 +625,7 @@ class MochatChannel(BaseChannel):
             except asyncio.CancelledError:
                 break
             except Exception as e:
-                logger.warning(f"Mochat panel polling error ({panel_id}): {e}")
+                logger.warning("Mochat panel polling error ({}): {}", panel_id, e)
                 await asyncio.sleep(sleep_s)

     # ---- inbound event processing ------------------------------------------
@@ -836,7 +836,7 @@ class MochatChannel(BaseChannel):
         try:
             data = json.loads(self._cursor_path.read_text("utf-8"))
         except Exception as e:
-            logger.warning(f"Failed to read Mochat cursor file: {e}")
+            logger.warning("Failed to read Mochat cursor file: {}", e)
             return
         cursors = data.get("cursors") if isinstance(data, dict) else None
         if isinstance(cursors, dict):
@@ -852,7 +852,7 @@ class MochatChannel(BaseChannel):
                 "cursors": self._session_cursor,
             }, ensure_ascii=False, indent=2) + "\n", "utf-8")
         except Exception as e:
-            logger.warning(f"Failed to save Mochat cursor file: {e}")
+            logger.warning("Failed to save Mochat cursor file: {}", e)

     # ---- HTTP helpers ------------------------------------------------------


@@ -34,7 +34,7 @@ def _make_bot_class(channel: "QQChannel") -> "type[botpy.Client]":
             super().__init__(intents=intents)

         async def on_ready(self):
-            logger.info(f"QQ bot ready: {self.robot.name}")
+            logger.info("QQ bot ready: {}", self.robot.name)

         async def on_c2c_message_create(self, message: "C2CMessage"):
             await channel._on_message(message)
@@ -80,7 +80,7 @@ class QQChannel(BaseChannel):
             try:
                 await self._client.start(appid=self.config.app_id, secret=self.config.secret)
             except Exception as e:
-                logger.warning(f"QQ bot error: {e}")
+                logger.warning("QQ bot error: {}", e)
                 if self._running:
                     logger.info("Reconnecting QQ bot in 5 seconds...")
                     await asyncio.sleep(5)
@@ -108,7 +108,7 @@ class QQChannel(BaseChannel):
                 content=msg.content,
             )
         except Exception as e:
-            logger.error(f"Error sending QQ message: {e}")
+            logger.error("Error sending QQ message: {}", e)

     async def _on_message(self, data: "C2CMessage") -> None:
         """Handle incoming message from QQ."""
@@ -131,4 +131,4 @@ class QQChannel(BaseChannel):
                 metadata={"message_id": data.id},
            )
         except Exception as e:
-            logger.error(f"Error handling QQ message: {e}")
+            logger.error("Error handling QQ message: {}", e)


@@ -36,7 +36,7 @@ class SlackChannel(BaseChannel):
             logger.error("Slack bot/app token not configured")
             return
         if self.config.mode != "socket":
-            logger.error(f"Unsupported Slack mode: {self.config.mode}")
+            logger.error("Unsupported Slack mode: {}", self.config.mode)
             return

         self._running = True
@@ -53,9 +53,9 @@ class SlackChannel(BaseChannel):
         try:
             auth = await self._web_client.auth_test()
             self._bot_user_id = auth.get("user_id")
-            logger.info(f"Slack bot connected as {self._bot_user_id}")
+            logger.info("Slack bot connected as {}", self._bot_user_id)
         except Exception as e:
-            logger.warning(f"Slack auth_test failed: {e}")
+            logger.warning("Slack auth_test failed: {}", e)

         logger.info("Starting Slack Socket Mode client...")
         await self._socket_client.connect()
@@ -70,7 +70,7 @@ class SlackChannel(BaseChannel):
             try:
                 await self._socket_client.close()
             except Exception as e:
-                logger.warning(f"Slack socket close failed: {e}")
+                logger.warning("Slack socket close failed: {}", e)
             self._socket_client = None

     async def send(self, msg: OutboundMessage) -> None:
@@ -90,7 +90,7 @@ class SlackChannel(BaseChannel):
                 thread_ts=thread_ts if use_thread else None,
            )
         except Exception as e:
-            logger.error(f"Error sending Slack message: {e}")
+            logger.error("Error sending Slack message: {}", e)

     async def _on_socket_request(
         self,
@@ -164,7 +164,7 @@ class SlackChannel(BaseChannel):
                     timestamp=event.get("ts"),
                 )
             except Exception as e:
-                logger.debug(f"Slack reactions_add failed: {e}")
+                logger.debug("Slack reactions_add failed: {}", e)
         await self._handle_message(
             sender_id=sender_id,


@@ -146,7 +146,7 @@ class TelegramChannel(BaseChannel):
         # Add command handlers
         self._app.add_handler(CommandHandler("start", self._on_start))
         self._app.add_handler(CommandHandler("new", self._forward_command))
-        self._app.add_handler(CommandHandler("help", self._forward_command))
+        self._app.add_handler(CommandHandler("help", self._on_help))

         # Add message handler for text, photos, voice, documents
         self._app.add_handler(
@@ -165,13 +165,13 @@ class TelegramChannel(BaseChannel):
         # Get bot info and register command menu
         bot_info = await self._app.bot.get_me()
-        logger.info(f"Telegram bot @{bot_info.username} connected")
+        logger.info("Telegram bot @{} connected", bot_info.username)
         try:
             await self._app.bot.set_my_commands(self.BOT_COMMANDS)
             logger.debug("Telegram bot commands registered")
         except Exception as e:
-            logger.warning(f"Failed to register bot commands: {e}")
+            logger.warning("Failed to register bot commands: {}", e)

         # Start polling (this runs until stopped)
         await self._app.updater.start_polling(
@@ -221,7 +221,7 @@ class TelegramChannel(BaseChannel):
         try:
             chat_id = int(msg.chat_id)
         except ValueError:
-            logger.error(f"Invalid chat_id: {msg.chat_id}")
+            logger.error("Invalid chat_id: {}", msg.chat_id)
             return

         # Send media files
@@ -238,7 +238,7 @@ class TelegramChannel(BaseChannel):
                     await sender(chat_id=chat_id, **{param: f})
             except Exception as e:
                 filename = media_path.rsplit("/", 1)[-1]
-                logger.error(f"Failed to send media {media_path}: {e}")
+                logger.error("Failed to send media {}: {}", media_path, e)
                 await self._app.bot.send_message(chat_id=chat_id, text=f"[Failed to send: (unknown)]")

         # Send text content
@@ -248,11 +248,11 @@ class TelegramChannel(BaseChannel):
                 html = _markdown_to_telegram_html(chunk)
                 await self._app.bot.send_message(chat_id=chat_id, text=html, parse_mode="HTML")
             except Exception as e:
-                logger.warning(f"HTML parse failed, falling back to plain text: {e}")
+                logger.warning("HTML parse failed, falling back to plain text: {}", e)
                 try:
                     await self._app.bot.send_message(chat_id=chat_id, text=chunk)
                 except Exception as e2:
-                    logger.error(f"Error sending Telegram message: {e2}")
+                    logger.error("Error sending Telegram message: {}", e2)

     async def _on_start(self, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
         """Handle /start command."""
@@ -266,6 +266,16 @@ class TelegramChannel(BaseChannel):
             "Type /help to see available commands."
         )

+    async def _on_help(self, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
+        """Handle /help command, bypassing ACL so all users can access it."""
+        if not update.message:
+            return
+        await update.message.reply_text(
+            "🐈 nanobot commands:\n"
+            "/new — Start a new conversation\n"
+            "/help — Show available commands"
+        )
+
     @staticmethod
     def _sender_id(user) -> str:
         """Build sender_id with username for allowlist matching."""
@@ -344,21 +354,21 @@ class TelegramChannel(BaseChannel):
                     transcriber = GroqTranscriptionProvider(api_key=self.groq_api_key)
                     transcription = await transcriber.transcribe(file_path)
                     if transcription:
-                        logger.info(f"Transcribed {media_type}: {transcription[:50]}...")
+                        logger.info("Transcribed {}: {}...", media_type, transcription[:50])
                         content_parts.append(f"[transcription: {transcription}]")
                     else:
                         content_parts.append(f"[{media_type}: {file_path}]")
                 else:
                     content_parts.append(f"[{media_type}: {file_path}]")
-                logger.debug(f"Downloaded {media_type} to {file_path}")
+                logger.debug("Downloaded {} to {}", media_type, file_path)
             except Exception as e:
-                logger.error(f"Failed to download media: {e}")
+                logger.error("Failed to download media: {}", e)
                 content_parts.append(f"[{media_type}: download failed]")

         content = "\n".join(content_parts) if content_parts else "[empty message]"
-        logger.debug(f"Telegram message from {sender_id}: {content[:50]}...")
+        logger.debug("Telegram message from {}: {}...", sender_id, content[:50])

         str_chat_id = str(chat_id)
@@ -401,11 +411,11 @@ class TelegramChannel(BaseChannel):
         except asyncio.CancelledError:
             pass
         except Exception as e:
-            logger.debug(f"Typing indicator stopped for {chat_id}: {e}")
+            logger.debug("Typing indicator stopped for {}: {}", chat_id, e)

     async def _on_error(self, update: object, context: ContextTypes.DEFAULT_TYPE) -> None:
         """Log polling / handler errors instead of silently swallowing them."""
-        logger.error(f"Telegram error: {context.error}")
+        logger.error("Telegram error: {}", context.error)

     def _get_extension(self, media_type: str, mime_type: str | None) -> str:
         """Get file extension based on media type."""


@@ -34,7 +34,7 @@ class WhatsAppChannel(BaseChannel):
         bridge_url = self.config.bridge_url
-        logger.info(f"Connecting to WhatsApp bridge at {bridge_url}...")
+        logger.info("Connecting to WhatsApp bridge at {}...", bridge_url)

         self._running = True
@@ -53,14 +53,14 @@ class WhatsAppChannel(BaseChannel):
                     try:
                         await self._handle_bridge_message(message)
                     except Exception as e:
-                        logger.error(f"Error handling bridge message: {e}")
+                        logger.error("Error handling bridge message: {}", e)
             except asyncio.CancelledError:
                 break
             except Exception as e:
                 self._connected = False
                 self._ws = None
-                logger.warning(f"WhatsApp bridge connection error: {e}")
+                logger.warning("WhatsApp bridge connection error: {}", e)
                 if self._running:
                     logger.info("Reconnecting in 5 seconds...")
@@ -87,16 +87,16 @@ class WhatsAppChannel(BaseChannel):
                 "to": msg.chat_id,
                 "text": msg.content
             }
-            await self._ws.send(json.dumps(payload))
+            await self._ws.send(json.dumps(payload, ensure_ascii=False))
         except Exception as e:
-            logger.error(f"Error sending WhatsApp message: {e}")
+            logger.error("Error sending WhatsApp message: {}", e)

     async def _handle_bridge_message(self, raw: str) -> None:
         """Handle a message from the bridge."""
         try:
             data = json.loads(raw)
         except json.JSONDecodeError:
-            logger.warning(f"Invalid JSON from bridge: {raw[:100]}")
+            logger.warning("Invalid JSON from bridge: {}", raw[:100])
             return

         msg_type = data.get("type")
@@ -112,11 +112,11 @@ class WhatsAppChannel(BaseChannel):
             # Extract just the phone number or lid as chat_id
             user_id = pn if pn else sender
             sender_id = user_id.split("@")[0] if "@" in user_id else user_id
-            logger.info(f"Sender {sender}")
+            logger.info("Sender {}", sender)

             # Handle voice transcription if it's a voice message
             if content == "[Voice Message]":
-                logger.info(f"Voice message received from {sender_id}, but direct download from bridge is not yet supported.")
+                logger.info("Voice message received from {}, but direct download from bridge is not yet supported.", sender_id)
                 content = "[Voice Message: Transcription not available for WhatsApp yet]"

             await self._handle_message(
@@ -133,7 +133,7 @@ class WhatsAppChannel(BaseChannel):
         elif msg_type == "status":
             # Connection status update
             status = data.get("status")
-            logger.info(f"WhatsApp status: {status}")
+            logger.info("WhatsApp status: {}", status)
             if status == "connected":
                 self._connected = True
@@ -145,4 +145,4 @@ class WhatsAppChannel(BaseChannel):
             logger.info("Scan QR code in the bridge terminal to connect WhatsApp")
         elif msg_type == "error":
-            logger.error(f"WhatsApp bridge error: {data.get('error')}")
+            logger.error("WhatsApp bridge error: {}", data.get('error'))
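The bridge handler above dispatches each JSON frame on its `type` field and drops malformed frames without crashing the receive loop. A condensed, self-contained sketch of that dispatch (the frame shapes here are illustrative, mirroring the hunk rather than quoting the bridge protocol):

```python
import json

def handle_bridge_frame(raw: str) -> str:
    # Dispatch a bridge frame by its "type" field; malformed JSON is
    # reported and dropped rather than raising out of the receive loop.
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return f"warn: invalid JSON from bridge: {raw[:100]}"
    msg_type = data.get("type")
    if msg_type == "message":
        return f"message from {data.get('from', '?')}"
    if msg_type == "status":
        return f"status: {data.get('status')}"
    if msg_type == "error":
        return f"error: {data.get('error')}"
    return f"ignored frame type: {msg_type}"

assert handle_bridge_frame("not json").startswith("warn:")
assert handle_bridge_frame('{"type": "status", "status": "connected"}') == "status: connected"
```

Truncating the raw payload to 100 characters in the warning, as the real handler does, keeps a garbage frame from flooding the log.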


@@ -243,7 +243,7 @@ Information about the user goes here.
     for filename, content in templates.items():
         file_path = workspace / filename
         if not file_path.exists():
-            file_path.write_text(content)
+            file_path.write_text(content, encoding="utf-8")
             console.print(f"  [dim]Created (unknown)[/dim]")

     # Create memory directory and MEMORY.md
@@ -266,12 +266,12 @@ This file stores important information that should persist across sessions.
 ## Important Notes
 (Things to remember)
-""")
+""", encoding="utf-8")
         console.print("  [dim]Created memory/MEMORY.md[/dim]")

     history_file = memory_dir / "HISTORY.md"
     if not history_file.exists():
-        history_file.write_text("")
+        history_file.write_text("", encoding="utf-8")
         console.print("  [dim]Created memory/HISTORY.md[/dim]")

     # Create skills directory for custom user skills
@@ -805,14 +805,18 @@ def cron_add(
     store_path = get_data_dir() / "cron" / "jobs.json"
     service = CronService(store_path)

-    job = service.add_job(
-        name=name,
-        schedule=schedule,
-        message=message,
-        deliver=deliver,
-        to=to,
-        channel=channel,
-    )
+    try:
+        job = service.add_job(
+            name=name,
+            schedule=schedule,
+            message=message,
+            deliver=deliver,
+            to=to,
+            channel=channel,
+        )
+    except ValueError as e:
+        console.print(f"[red]Error: {e}[/red]")
+        raise typer.Exit(1) from e
     console.print(f"[green]✓[/green] Added job '{job.name}' ({job.id})")


@@ -31,7 +31,7 @@ def load_config(config_path: Path | None = None) -> Config:
     if path.exists():
         try:
-            with open(path) as f:
+            with open(path, encoding="utf-8") as f:
                 data = json.load(f)
                 data = _migrate_config(data)
                 return Config.model_validate(data)
@@ -55,8 +55,8 @@ def save_config(config: Config, config_path: Path | None = None) -> None:
     data = config.model_dump(by_alias=True)
-    with open(path, "w") as f:
-        json.dump(data, f, indent=2)
+    with open(path, "w", encoding="utf-8") as f:
+        json.dump(data, f, indent=2, ensure_ascii=False)

 def _migrate_config(data: dict) -> dict:
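The `encoding="utf-8"` / `ensure_ascii=False` pairing recurs throughout this commit. A quick demonstration of what `ensure_ascii=False` changes for CJK values like the provider names in this config:

```python
import json

cfg = {"display_name": "硅基流动"}

escaped = json.dumps(cfg)                       # default: \uXXXX escapes
readable = json.dumps(cfg, ensure_ascii=False)  # raw UTF-8 characters

assert "\\u" in escaped and "硅基流动" not in escaped
assert "硅基流动" in readable
# Both round-trip to the same data; only on-disk readability differs.
assert json.loads(escaped) == json.loads(readable) == cfg
```

The explicit `encoding="utf-8"` on `open`/`write_text` matters on Windows, where the default locale encoding would otherwise garble the non-escaped output.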


@@ -220,6 +220,7 @@ class ProvidersConfig(Base):
     minimax: ProviderConfig = Field(default_factory=ProviderConfig)
     aihubmix: ProviderConfig = Field(default_factory=ProviderConfig)  # AiHubMix API gateway
     siliconflow: ProviderConfig = Field(default_factory=ProviderConfig)  # SiliconFlow (硅基流动) API gateway
+    volcengine: ProviderConfig = Field(default_factory=ProviderConfig)  # VolcEngine (火山引擎) API gateway
     openai_codex: ProviderConfig = Field(default_factory=ProviderConfig)  # OpenAI Codex (OAuth)
     github_copilot: ProviderConfig = Field(default_factory=ProviderConfig)  # Github Copilot (OAuth)
@@ -288,11 +289,25 @@ class Config(BaseSettings):
         from nanobot.providers.registry import PROVIDERS

         model_lower = (model or self.agents.defaults.model).lower()
+        model_normalized = model_lower.replace("-", "_")
+        model_prefix = model_lower.split("/", 1)[0] if "/" in model_lower else ""
+        normalized_prefix = model_prefix.replace("-", "_")
+
+        def _kw_matches(kw: str) -> bool:
+            kw = kw.lower()
+            return kw in model_lower or kw.replace("-", "_") in model_normalized
+
+        # Explicit provider prefix wins — prevents `github-copilot/...codex` matching openai_codex.
+        for spec in PROVIDERS:
+            p = getattr(self.providers, spec.name, None)
+            if p and model_prefix and normalized_prefix == spec.name:
+                if spec.is_oauth or p.api_key:
+                    return p, spec.name
+
         # Match by keyword (order follows PROVIDERS registry)
         for spec in PROVIDERS:
             p = getattr(self.providers, spec.name, None)
-            if p and any(kw in model_lower for kw in spec.keywords):
+            if p and any(_kw_matches(kw) for kw in spec.keywords):
                 if spec.is_oauth or p.api_key:
                     return p, spec.name


@@ -45,6 +45,20 @@ def _compute_next_run(schedule: CronSchedule, now_ms: int) -> int | None:
     return None

+def _validate_schedule_for_add(schedule: CronSchedule) -> None:
+    """Validate schedule fields that would otherwise create non-runnable jobs."""
+    if schedule.tz and schedule.kind != "cron":
+        raise ValueError("tz can only be used with cron schedules")
+    if schedule.kind == "cron" and schedule.tz:
+        try:
+            from zoneinfo import ZoneInfo
+            ZoneInfo(schedule.tz)
+        except Exception:
+            raise ValueError(f"unknown timezone '{schedule.tz}'") from None
+
 class CronService:
     """Service for managing and executing scheduled jobs."""
@@ -66,7 +80,7 @@ class CronService:
         if self.store_path.exists():
             try:
-                data = json.loads(self.store_path.read_text())
+                data = json.loads(self.store_path.read_text(encoding="utf-8"))
                 jobs = []
                 for j in data.get("jobs", []):
                     jobs.append(CronJob(
@@ -99,7 +113,7 @@ class CronService:
                     ))
                 self._store = CronStore(jobs=jobs)
             except Exception as e:
-                logger.warning(f"Failed to load cron store: {e}")
+                logger.warning("Failed to load cron store: {}", e)
                 self._store = CronStore()
         else:
             self._store = CronStore()
@@ -148,7 +162,7 @@ class CronService:
             ]
         }
-        self.store_path.write_text(json.dumps(data, indent=2))
+        self.store_path.write_text(json.dumps(data, indent=2, ensure_ascii=False), encoding="utf-8")

     async def start(self) -> None:
         """Start the cron service."""
@@ -157,7 +171,7 @@ class CronService:
         self._recompute_next_runs()
         self._save_store()
         self._arm_timer()
-        logger.info(f"Cron service started with {len(self._store.jobs if self._store else [])} jobs")
+        logger.info("Cron service started with {} jobs", len(self._store.jobs if self._store else []))

     def stop(self) -> None:
         """Stop the cron service."""
@@ -222,7 +236,7 @@ class CronService:
     async def _execute_job(self, job: CronJob) -> None:
         """Execute a single job."""
         start_ms = _now_ms()
-        logger.info(f"Cron: executing job '{job.name}' ({job.id})")
+        logger.info("Cron: executing job '{}' ({})", job.name, job.id)

         try:
             response = None
@@ -231,12 +245,12 @@ class CronService:
             job.state.last_status = "ok"
             job.state.last_error = None
-            logger.info(f"Cron: job '{job.name}' completed")
+            logger.info("Cron: job '{}' completed", job.name)
         except Exception as e:
             job.state.last_status = "error"
             job.state.last_error = str(e)
-            logger.error(f"Cron: job '{job.name}' failed: {e}")
+            logger.error("Cron: job '{}' failed: {}", job.name, e)

         job.state.last_run_at_ms = start_ms
         job.updated_at_ms = _now_ms()
@@ -272,6 +286,7 @@ class CronService:
     ) -> CronJob:
         """Add a new job."""
         store = self._load_store()
+        _validate_schedule_for_add(schedule)
         now = _now_ms()

         job = CronJob(
@@ -296,7 +311,7 @@ class CronService:
         self._save_store()
         self._arm_timer()
-        logger.info(f"Cron: added job '{name}' ({job.id})")
+        logger.info("Cron: added job '{}' ({})", name, job.id)
         return job

     def remove_job(self, job_id: str) -> bool:
@@ -309,7 +324,7 @@ class CronService:
         if removed:
             self._save_store()
             self._arm_timer()
-            logger.info(f"Cron: removed job {job_id}")
+            logger.info("Cron: removed job {}", job_id)
         return removed
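The new `_validate_schedule_for_add` fails fast on a bad timezone instead of persisting a job that can never compute its next run. The core check, isolated (this relies on the IANA tz database that `zoneinfo` reads; on systems without it, install the `tzdata` package):

```python
from zoneinfo import ZoneInfo

def validate_tz(tz: str) -> None:
    # Probe the IANA name up front; an unknown tz would otherwise only
    # surface later, when the scheduler tries to compute the next run.
    try:
        ZoneInfo(tz)
    except Exception:
        raise ValueError(f"unknown timezone '{tz}'") from None

validate_tz("UTC")  # fine
try:
    validate_tz("Mars/Olympus_Mons")
except ValueError as e:
    assert "unknown timezone" in str(e)
```

Raising `ValueError` (rather than letting `ZoneInfoNotFoundError` escape) is what lets the `cron add` CLI catch it and print a clean error, as the `cron_add` hunk above does.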


@@ -65,7 +65,7 @@ class HeartbeatService:
         """Read HEARTBEAT.md content."""
         if self.heartbeat_file.exists():
             try:
-                return self.heartbeat_file.read_text()
+                return self.heartbeat_file.read_text(encoding="utf-8")
             except Exception:
                 return None
         return None
@@ -78,7 +78,7 @@ class HeartbeatService:
         self._running = True
         self._task = asyncio.create_task(self._run_loop())
-        logger.info(f"Heartbeat started (every {self.interval_s}s)")
+        logger.info("Heartbeat started (every {}s)", self.interval_s)

     def stop(self) -> None:
         """Stop the heartbeat service."""
@@ -97,7 +97,7 @@ class HeartbeatService:
             except asyncio.CancelledError:
                 break
             except Exception as e:
-                logger.error(f"Heartbeat error: {e}")
+                logger.error("Heartbeat error: {}", e)

     async def _tick(self) -> None:
         """Execute a single heartbeat tick."""
@@ -118,10 +118,10 @@ class HeartbeatService:
             if HEARTBEAT_OK_TOKEN.replace("_", "") in response.upper().replace("_", ""):
                 logger.info("Heartbeat: OK (no action needed)")
             else:
-                logger.info(f"Heartbeat: completed task")
+                logger.info("Heartbeat: completed task")
         except Exception as e:
-            logger.error(f"Heartbeat execution failed: {e}")
+            logger.error("Heartbeat execution failed: {}", e)

     async def trigger_now(self) -> str | None:
         """Manually trigger a heartbeat."""


@@ -88,11 +88,55 @@ class LiteLLMProvider(LLMProvider):
         # Standard mode: auto-prefix for known providers
         spec = find_by_model(model)
         if spec and spec.litellm_prefix:
+            model = self._canonicalize_explicit_prefix(model, spec.name, spec.litellm_prefix)
             if not any(model.startswith(s) for s in spec.skip_prefixes):
                 model = f"{spec.litellm_prefix}/{model}"
         return model

+    @staticmethod
+    def _canonicalize_explicit_prefix(model: str, spec_name: str, canonical_prefix: str) -> str:
+        """Normalize explicit provider prefixes like `github-copilot/...`."""
+        if "/" not in model:
+            return model
+        prefix, remainder = model.split("/", 1)
+        if prefix.lower().replace("-", "_") != spec_name:
+            return model
+        return f"{canonical_prefix}/{remainder}"
+
+    def _supports_cache_control(self, model: str) -> bool:
+        """Return True when the provider supports cache_control on content blocks."""
+        if self._gateway is not None:
+            return False
+        spec = find_by_model(model)
+        return spec is not None and spec.supports_prompt_caching
+
+    def _apply_cache_control(
+        self,
+        messages: list[dict[str, Any]],
+        tools: list[dict[str, Any]] | None,
+    ) -> tuple[list[dict[str, Any]], list[dict[str, Any]] | None]:
+        """Return copies of messages and tools with cache_control injected."""
+        new_messages = []
+        for msg in messages:
+            if msg.get("role") == "system":
+                content = msg["content"]
+                if isinstance(content, str):
+                    new_content = [{"type": "text", "text": content, "cache_control": {"type": "ephemeral"}}]
+                else:
+                    new_content = list(content)
+                    new_content[-1] = {**new_content[-1], "cache_control": {"type": "ephemeral"}}
+                new_messages.append({**msg, "content": new_content})
+            else:
+                new_messages.append(msg)
+        new_tools = tools
+        if tools:
+            new_tools = list(tools)
+            new_tools[-1] = {**new_tools[-1], "cache_control": {"type": "ephemeral"}}
+        return new_messages, new_tools
+
     def _apply_model_overrides(self, model: str, kwargs: dict[str, Any]) -> None:
         """Apply model-specific parameter overrides from the registry."""
         model_lower = model.lower()
@@ -124,7 +168,11 @@ class LiteLLMProvider(LLMProvider):
         Returns:
             LLMResponse with content and/or tool calls.
         """
-        model = self._resolve_model(model or self.default_model)
+        original_model = model or self.default_model
+        model = self._resolve_model(original_model)
+
+        if self._supports_cache_control(original_model):
+            messages, tools = self._apply_cache_control(messages, tools)

         # Clamp max_tokens to at least 1 — negative or zero values cause
         # LiteLLM to reject the request with "max_tokens must be at least 1".
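`_apply_cache_control` above works on copies so the caller's message list is never mutated. The core move, tagging the system prompt's last content block as an Anthropic-style ephemeral cache point, can be exercised in isolation:

```python
def apply_cache_control(messages):
    # Tag the system prompt's last content block as cacheable; a plain
    # string is first wrapped into a one-element content-block list.
    out = []
    for msg in messages:
        if msg.get("role") == "system":
            content = msg["content"]
            if isinstance(content, str):
                blocks = [{"type": "text", "text": content}]
            else:
                blocks = [dict(b) for b in content]
            blocks[-1] = {**blocks[-1], "cache_control": {"type": "ephemeral"}}
            out.append({**msg, "content": blocks})
        else:
            out.append(msg)
    return out

original = [{"role": "system", "content": "You are nanobot."},
            {"role": "user", "content": "hi"}]
tagged = apply_cache_control(original)
assert tagged[0]["content"][-1]["cache_control"] == {"type": "ephemeral"}
# The input list is left untouched.
assert original[0]["content"] == "You are nanobot."
```

Gating this on `supports_prompt_caching` in the registry (and skipping gateways) avoids sending `cache_control` blocks to backends that would reject the unknown field.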


@@ -80,7 +80,7 @@ class OpenAICodexProvider(LLMProvider):
     def _strip_model_prefix(model: str) -> str:
-        if model.startswith("openai-codex/"):
+        if model.startswith("openai-codex/") or model.startswith("openai_codex/"):
             return model.split("/", 1)[1]
         return model
@@ -176,7 +176,7 @@ def _convert_messages(messages: list[dict[str, Any]]) -> tuple[str, list[dict[st
         if role == "tool":
             call_id, _ = _split_tool_call_id(msg.get("tool_call_id"))
-            output_text = content if isinstance(content, str) else json.dumps(content)
+            output_text = content if isinstance(content, str) else json.dumps(content, ensure_ascii=False)
             input_items.append(
                 {
                     "type": "function_call_output",


@@ -57,6 +57,9 @@ class ProviderSpec:
     # Direct providers bypass LiteLLM entirely (e.g., CustomProvider)
     is_direct: bool = False

+    # Provider supports cache_control on content blocks (e.g. Anthropic prompt caching)
+    supports_prompt_caching: bool = False
+
     @property
     def label(self) -> str:
         return self.display_name or self.name.title()
@@ -137,6 +140,24 @@ PROVIDERS: tuple[ProviderSpec, ...] = (
         model_overrides=(),
     ),
+    # VolcEngine (火山引擎): OpenAI-compatible gateway
+    ProviderSpec(
+        name="volcengine",
+        keywords=("volcengine", "volces", "ark"),
+        env_key="OPENAI_API_KEY",
+        display_name="VolcEngine",
+        litellm_prefix="openai",
+        skip_prefixes=(),
+        env_extras=(),
+        is_gateway=True,
+        is_local=False,
+        detect_by_key_prefix="",
+        detect_by_base_keyword="volces",
+        default_api_base="https://ark.cn-beijing.volces.com/api/v3",
+        strip_model_prefix=False,
+        model_overrides=(),
+    ),

     # === Standard providers (matched by model-name keywords) ===============
     # Anthropic: LiteLLM recognizes "claude-*" natively, no prefix needed.
@@ -155,6 +176,7 @@ PROVIDERS: tuple[ProviderSpec, ...] = (
         default_api_base="",
         strip_model_prefix=False,
         model_overrides=(),
+        supports_prompt_caching=True,
     ),

     # OpenAI: LiteLLM recognizes "gpt-*" natively, no prefix needed.
@@ -384,10 +406,18 @@ def find_by_model(model: str) -> ProviderSpec | None:
     """Match a standard provider by model-name keyword (case-insensitive).

     Skips gateways/local — those are matched by api_key/api_base instead."""
     model_lower = model.lower()
-    for spec in PROVIDERS:
-        if spec.is_gateway or spec.is_local:
-            continue
-        if any(kw in model_lower for kw in spec.keywords):
+    model_normalized = model_lower.replace("-", "_")
+    model_prefix = model_lower.split("/", 1)[0] if "/" in model_lower else ""
+    normalized_prefix = model_prefix.replace("-", "_")
+    std_specs = [s for s in PROVIDERS if not s.is_gateway and not s.is_local]
+    # Prefer explicit provider prefix — prevents `github-copilot/...codex` matching openai_codex.
+    for spec in std_specs:
+        if model_prefix and normalized_prefix == spec.name:
+            return spec
+    for spec in std_specs:
+        if any(kw in model_lower or kw.replace("-", "_") in model_normalized for kw in spec.keywords):
             return spec
     return None


@@ -35,7 +35,7 @@ class GroqTranscriptionProvider:
         path = Path(file_path)
         if not path.exists():
-            logger.error(f"Audio file not found: {file_path}")
+            logger.error("Audio file not found: {}", file_path)
             return ""

         try:
@@ -61,5 +61,5 @@ class GroqTranscriptionProvider:
             return data.get("text", "")
         except Exception as e:
-            logger.error(f"Groq transcription error: {e}")
+            logger.error("Groq transcription error: {}", e)
             return ""


@@ -110,7 +110,7 @@ class SessionManager:
         if legacy_path.exists():
             import shutil
             shutil.move(str(legacy_path), str(path))
-            logger.info(f"Migrated session {key} from legacy path")
+            logger.info("Migrated session {} from legacy path", key)
         if not path.exists():
             return None
@@ -121,7 +121,7 @@ class SessionManager:
         created_at = None
         last_consolidated = 0
-        with open(path) as f:
+        with open(path, encoding="utf-8") as f:
             for line in f:
                 line = line.strip()
                 if not line:
@@ -144,14 +144,14 @@ class SessionManager:
                 last_consolidated=last_consolidated
             )
         except Exception as e:
-            logger.warning(f"Failed to load session {key}: {e}")
+            logger.warning("Failed to load session {}: {}", key, e)
             return None

     def save(self, session: Session) -> None:
         """Save a session to disk."""
         path = self._get_session_path(session.key)
-        with open(path, "w") as f:
+        with open(path, "w", encoding="utf-8") as f:
             metadata_line = {
                 "_type": "metadata",
                 "created_at": session.created_at.isoformat(),
@@ -159,9 +159,9 @@ class SessionManager:
                 "metadata": session.metadata,
                 "last_consolidated": session.last_consolidated
             }
-            f.write(json.dumps(metadata_line) + "\n")
+            f.write(json.dumps(metadata_line, ensure_ascii=False) + "\n")
             for msg in session.messages:
-                f.write(json.dumps(msg) + "\n")
+                f.write(json.dumps(msg, ensure_ascii=False) + "\n")
         self._cache[session.key] = session
@@ -181,7 +181,7 @@ class SessionManager:
         for path in self.sessions_dir.glob("*.jsonl"):
             try:
                 # Read just the metadata line
-                with open(path) as f:
+                with open(path, encoding="utf-8") as f:
                     first_line = f.readline().strip()
                     if first_line:
                         data = json.loads(first_line)
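A minimal round trip shows what `encoding="utf-8"` plus `ensure_ascii=False` buy the session files: writes no longer depend on the platform's locale default, and non-ASCII message text is stored as readable characters rather than `\uXXXX` escapes. The file name and message content below are illustrative:

```python
import json
import tempfile
from pathlib import Path

msg = {"role": "user", "content": "héllo 世界"}

# Write one JSONL line the way the patched SessionManager does.
path = Path(tempfile.mkdtemp()) / "session.jsonl"
with open(path, "w", encoding="utf-8") as f:
    f.write(json.dumps(msg, ensure_ascii=False) + "\n")

raw = path.read_text(encoding="utf-8")
print("世界" in raw)  # True: literal characters on disk, not \uXXXX escapes

# Reading back with an explicit encoding round-trips the message exactly.
with open(path, encoding="utf-8") as f:
    restored = json.loads(f.readline())
print(restored == msg)  # True
```

Without the explicit encoding, the same code can raise `UnicodeEncodeError` on platforms whose default locale encoding is not UTF-8 (a common failure on Windows).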

View File

@@ -17,37 +17,37 @@ classifiers = [
 ]
 dependencies = [
-    "typer>=0.9.0",
-    "litellm>=1.0.0",
-    "pydantic>=2.0.0",
-    "pydantic-settings>=2.0.0",
-    "websockets>=12.0",
-    "websocket-client>=1.6.0",
-    "httpx>=0.25.0",
-    "oauth-cli-kit>=0.1.1",
-    "loguru>=0.7.0",
-    "readability-lxml>=0.8.0",
-    "rich>=13.0.0",
-    "croniter>=2.0.0",
-    "dingtalk-stream>=0.4.0",
-    "python-telegram-bot[socks]>=21.0",
-    "lark-oapi>=1.0.0",
-    "socksio>=1.0.0",
-    "python-socketio>=5.11.0",
-    "msgpack>=1.0.8",
-    "slack-sdk>=3.26.0",
-    "slackify-markdown>=0.2.0",
-    "qq-botpy>=1.0.0",
-    "python-socks[asyncio]>=2.4.0",
-    "prompt-toolkit>=3.0.0",
-    "mcp>=1.0.0",
-    "json-repair>=0.30.0",
+    "typer>=0.20.0,<1.0.0",
+    "litellm>=1.81.5,<2.0.0",
+    "pydantic>=2.12.0,<3.0.0",
+    "pydantic-settings>=2.12.0,<3.0.0",
+    "websockets>=16.0,<17.0",
+    "websocket-client>=1.9.0,<2.0.0",
+    "httpx>=0.28.0,<1.0.0",
+    "oauth-cli-kit>=0.1.3,<1.0.0",
+    "loguru>=0.7.3,<1.0.0",
+    "readability-lxml>=0.8.4,<1.0.0",
+    "rich>=14.0.0,<15.0.0",
+    "croniter>=6.0.0,<7.0.0",
+    "dingtalk-stream>=0.24.0,<1.0.0",
+    "python-telegram-bot[socks]>=22.0,<23.0",
+    "lark-oapi>=1.5.0,<2.0.0",
+    "socksio>=1.0.0,<2.0.0",
+    "python-socketio>=5.16.0,<6.0.0",
+    "msgpack>=1.1.0,<2.0.0",
+    "slack-sdk>=3.39.0,<4.0.0",
+    "slackify-markdown>=0.2.0,<1.0.0",
+    "qq-botpy>=1.2.0,<2.0.0",
+    "python-socks[asyncio]>=2.8.0,<3.0.0",
+    "prompt-toolkit>=3.0.50,<4.0.0",
+    "mcp>=1.26.0,<2.0.0",
+    "json-repair>=0.57.0,<1.0.0",
 ]

 [project.optional-dependencies]
 dev = [
-    "pytest>=7.0.0",
-    "pytest-asyncio>=0.21.0",
+    "pytest>=9.0.0,<10.0.0",
+    "pytest-asyncio>=1.3.0,<2.0.0",
     "ruff>=0.1.0",
 ]

View File

@@ -6,6 +6,10 @@ import pytest
 from typer.testing import CliRunner

 from nanobot.cli.commands import app
+from nanobot.config.schema import Config
+from nanobot.providers.litellm_provider import LiteLLMProvider
+from nanobot.providers.openai_codex_provider import _strip_model_prefix
+from nanobot.providers.registry import find_by_model

 runner = CliRunner()
@@ -90,3 +94,37 @@ def test_onboard_existing_workspace_safe_create(mock_paths):
     assert "Created workspace" not in result.stdout
     assert "Created AGENTS.md" in result.stdout
     assert (workspace_dir / "AGENTS.md").exists()
+
+
+def test_config_matches_github_copilot_codex_with_hyphen_prefix():
+    config = Config()
+    config.agents.defaults.model = "github-copilot/gpt-5.3-codex"
+    assert config.get_provider_name() == "github_copilot"
+
+
+def test_config_matches_openai_codex_with_hyphen_prefix():
+    config = Config()
+    config.agents.defaults.model = "openai-codex/gpt-5.1-codex"
+    assert config.get_provider_name() == "openai_codex"
+
+
+def test_find_by_model_prefers_explicit_prefix_over_generic_codex_keyword():
+    spec = find_by_model("github-copilot/gpt-5.3-codex")
+    assert spec is not None
+    assert spec.name == "github_copilot"
+
+
+def test_litellm_provider_canonicalizes_github_copilot_hyphen_prefix():
+    provider = LiteLLMProvider(default_model="github-copilot/gpt-5.3-codex")
+    resolved = provider._resolve_model("github-copilot/gpt-5.3-codex")
+    assert resolved == "github_copilot/gpt-5.3-codex"
+
+
+def test_openai_codex_strip_prefix_supports_hyphen_and_underscore():
+    assert _strip_model_prefix("openai-codex/gpt-5.1-codex") == "gpt-5.1-codex"
+    assert _strip_model_prefix("openai_codex/gpt-5.1-codex") == "gpt-5.1-codex"

View File

@@ -0,0 +1,29 @@
+from typer.testing import CliRunner
+
+from nanobot.cli.commands import app
+
+runner = CliRunner()
+
+
+def test_cron_add_rejects_invalid_timezone(monkeypatch, tmp_path) -> None:
+    monkeypatch.setattr("nanobot.config.loader.get_data_dir", lambda: tmp_path)
+
+    result = runner.invoke(
+        app,
+        [
+            "cron",
+            "add",
+            "--name",
+            "demo",
+            "--message",
+            "hello",
+            "--cron",
+            "0 9 * * *",
+            "--tz",
+            "America/Vancovuer",
+        ],
+    )
+
+    assert result.exit_code == 1
+    assert "Error: unknown timezone 'America/Vancovuer'" in result.stdout
+    assert not (tmp_path / "cron" / "jobs.json").exists()

View File

@@ -0,0 +1,30 @@
+import pytest
+
+from nanobot.cron.service import CronService
+from nanobot.cron.types import CronSchedule
+
+
+def test_add_job_rejects_unknown_timezone(tmp_path) -> None:
+    service = CronService(tmp_path / "cron" / "jobs.json")
+
+    with pytest.raises(ValueError, match="unknown timezone 'America/Vancovuer'"):
+        service.add_job(
+            name="tz typo",
+            schedule=CronSchedule(kind="cron", expr="0 9 * * *", tz="America/Vancovuer"),
+            message="hello",
+        )
+
+    assert service.list_jobs(include_disabled=True) == []
+
+
+def test_add_job_accepts_valid_timezone(tmp_path) -> None:
+    service = CronService(tmp_path / "cron" / "jobs.json")
+
+    job = service.add_job(
+        name="tz ok",
+        schedule=CronSchedule(kind="cron", expr="0 9 * * *", tz="America/Vancouver"),
+        message="hello",
+    )
+
+    assert job.schedule.tz == "America/Vancouver"
+    assert job.state.next_run_at_ms is not None
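The tests above pin down the error message but not the mechanism. One plausible implementation (a sketch, not necessarily what `CronService.add_job` does internally) validates the IANA name with the stdlib `zoneinfo` module, which raises when the key is not in the system's timezone database:

```python
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

def validate_timezone(tz: str) -> str:
    """Return tz unchanged if it is a known IANA name, else raise ValueError.

    Hypothetical helper matching the error message the cron tests expect;
    assumes the host has a timezone database (system tzdata or the tzdata
    PyPI package).
    """
    try:
        ZoneInfo(tz)  # raises on unknown or malformed keys
    except (ZoneInfoNotFoundError, ValueError):
        raise ValueError(f"unknown timezone {tz!r}")
    return tz

print(validate_timezone("America/Vancouver"))
```

Validating at `add_job` time (rather than at first fire) is what lets the CLI exit with code 1 before anything is written to `jobs.json`.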