Merge branch 'main' into main

Jack Lu
2026-03-01 13:35:24 +08:00
committed by GitHub
15 changed files with 289 additions and 79 deletions

View File

@@ -16,16 +16,24 @@
 ⚡️ Delivers core agent functionality in just **~4,000** lines of code — **99% smaller** than Clawdbot's 430k+ lines.
-📏 Real-time line count: **3,922 lines** (run `bash core_agent_lines.sh` to verify anytime)
+📏 Real-time line count: **3,935 lines** (run `bash core_agent_lines.sh` to verify anytime)

 ## 📢 News

+- **2026-02-28** 🚀 Released **v0.1.4.post3** — cleaner context, hardened session history, and smarter agent. Please see [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.4.post3) for details.
+- **2026-02-27** 🧠 Experimental thinking mode support, DingTalk media messages, Feishu and QQ channel fixes.
+- **2026-02-26** 🛡️ Session poisoning fix, WhatsApp dedup, Windows path guard, Mistral compatibility.
+- **2026-02-25** 🧹 New Matrix channel, cleaner session context, auto workspace template sync.
 - **2026-02-24** 🚀 Released **v0.1.4.post2** — a reliability-focused release with a redesigned heartbeat, prompt cache optimization, and hardened provider & channel stability. See [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.4.post2) for details.
 - **2026-02-23** 🔧 Virtual tool-call heartbeat, prompt cache optimization, Slack mrkdwn fixes.
 - **2026-02-22** 🛡️ Slack thread isolation, Discord typing fix, agent reliability improvements.
 - **2026-02-21** 🎉 Released **v0.1.4.post1** — new providers, media support across channels, and major stability improvements. See [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.4.post1) for details.
 - **2026-02-20** 🐦 Feishu now receives multimodal files from users. More reliable memory under the hood.
 - **2026-02-19** ✨ Slack now sends files, Discord splits long messages, and subagents work in CLI mode.
+
+<details>
+<summary>Earlier news</summary>
+
 - **2026-02-18** ⚡️ nanobot now supports VolcEngine, MCP custom auth headers, and Anthropic prompt caching.
 - **2026-02-17** 🎉 Released **v0.1.4** — MCP support, progress streaming, new providers, and multiple channel improvements. Please see [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.4) for details.
 - **2026-02-16** 🦞 nanobot now integrates a [ClawHub](https://clawhub.ai) skill — search and install public agent skills.
@@ -34,10 +42,6 @@
 - **2026-02-13** 🎉 Released **v0.1.3.post7** — includes security hardening and multiple improvements. **Please upgrade to the latest version to address security issues**. See [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post7) for more details.
 - **2026-02-12** 🧠 Redesigned memory system — Less code, more reliable. Join the [discussion](https://github.com/HKUDS/nanobot/discussions/566) about it!
 - **2026-02-11** ✨ Enhanced CLI experience and added MiniMax support!
-
-<details>
-<summary>Earlier news</summary>
-
 - **2026-02-10** 🎉 Released **v0.1.3.post6** with improvements! Check the updates [notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post6) and our [roadmap](https://github.com/HKUDS/nanobot/discussions/431).
 - **2026-02-09** 💬 Added Slack, Email, and QQ support — nanobot now supports multiple chat platforms!
 - **2026-02-08** 🔧 Refactored Providers—adding a new LLM provider now takes just 2 simple steps! Check [here](#providers).

View File

@@ -2,5 +2,5 @@
 nanobot - A lightweight AI agent framework
 """

-__version__ = "0.1.4.post2"
+__version__ = "0.1.4.post3"
 __logo__ = "🐈"

View File

@@ -150,6 +150,7 @@ Reply directly with text for conversations. Only use the 'message' tool to send
         content: str | None,
         tool_calls: list[dict[str, Any]] | None = None,
         reasoning_content: str | None = None,
+        thinking_blocks: list[dict] | None = None,
     ) -> list[dict[str, Any]]:
         """Add an assistant message to the message list."""
         msg: dict[str, Any] = {"role": "assistant", "content": content}
@@ -157,5 +158,7 @@ Reply directly with text for conversations. Only use the 'message' tool to send
             msg["tool_calls"] = tool_calls
         if reasoning_content is not None:
             msg["reasoning_content"] = reasoning_content
+        if thinking_blocks:
+            msg["thinking_blocks"] = thinking_blocks
         messages.append(msg)
         return messages
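
For reference, a minimal sketch of the assistant message `add_assistant_message` now produces for a thinking-enabled model. The key names come from this diff; the block shape is assumed to follow Anthropic's extended-thinking format, and all values are invented:

```python
# Sketch only: key names are from the diff; the thinking-block shape is an
# assumption based on Anthropic's extended-thinking format; values are invented.
msg = {
    "role": "assistant",
    "content": "The answer is 42.",
    "reasoning_content": "Let me check both cases...",  # Kimi / DeepSeek-R1 style
    "thinking_blocks": [                                # Anthropic extended thinking
        {"type": "thinking", "thinking": "Let me check both cases...", "signature": "..."}
    ],
}
```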

View File

@@ -56,6 +56,7 @@ class AgentLoop:
         temperature: float = 0.1,
         max_tokens: int = 4096,
         memory_window: int = 100,
+        reasoning_effort: str | None = None,
         brave_api_key: str | None = None,
         exec_config: ExecToolConfig | None = None,
         cron_service: CronService | None = None,
@@ -74,6 +75,7 @@ class AgentLoop:
         self.temperature = temperature
         self.max_tokens = max_tokens
         self.memory_window = memory_window
+        self.reasoning_effort = reasoning_effort
         self.brave_api_key = brave_api_key
         self.exec_config = exec_config or ExecToolConfig()
         self.cron_service = cron_service
@@ -89,6 +91,7 @@ class AgentLoop:
             model=self.model,
             temperature=self.temperature,
             max_tokens=self.max_tokens,
+            reasoning_effort=reasoning_effort,
             brave_api_key=brave_api_key,
             exec_config=self.exec_config,
             restrict_to_workspace=restrict_to_workspace,
@@ -191,6 +194,7 @@ class AgentLoop:
                 model=self.model,
                 temperature=self.temperature,
                 max_tokens=self.max_tokens,
+                reasoning_effort=self.reasoning_effort,
             )

             if response.has_tool_calls:
@@ -214,6 +218,7 @@ class AgentLoop:
                 messages = self.context.add_assistant_message(
                     messages, response.content, tool_call_dicts,
                     reasoning_content=response.reasoning_content,
+                    thinking_blocks=response.thinking_blocks,
                 )

                 for tool_call in response.tool_calls:
@@ -234,6 +239,7 @@ class AgentLoop:
                     break
                 messages = self.context.add_assistant_message(
                     messages, clean, reasoning_content=response.reasoning_content,
+                    thinking_blocks=response.thinking_blocks,
                 )
                 final_content = clean
                 break
@@ -447,7 +453,7 @@ class AgentLoop:
         """Save new-turn messages into session, truncating large tool results."""
         from datetime import datetime

         for m in messages[skip:]:
-            entry = {k: v for k, v in m.items() if k != "reasoning_content"}
+            entry = dict(m)
             role, content = entry.get("role"), entry.get("content")
             if role == "assistant" and not content and not entry.get("tool_calls"):
                 continue  # skip empty assistant messages — they poison session context
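
The `entry = dict(m)` change means reasoning artifacts are now persisted into session history instead of being stripped. A minimal before/after sketch with an invented message:

```python
# Invented message; shows what the session save keeps before vs. after this change.
m = {"role": "assistant", "content": "done",
     "reasoning_content": "...", "thinking_blocks": []}

entry_old = {k: v for k, v in m.items() if k != "reasoning_content"}  # reasoning dropped
entry_new = dict(m)  # shallow copy: reasoning_content and thinking_blocks survive
```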

View File

@@ -29,6 +29,7 @@ class SubagentManager:
         model: str | None = None,
         temperature: float = 0.7,
         max_tokens: int = 4096,
+        reasoning_effort: str | None = None,
         brave_api_key: str | None = None,
         exec_config: "ExecToolConfig | None" = None,
         restrict_to_workspace: bool = False,
@@ -40,6 +41,7 @@ class SubagentManager:
         self.model = model or provider.get_default_model()
         self.temperature = temperature
         self.max_tokens = max_tokens
+        self.reasoning_effort = reasoning_effort
         self.brave_api_key = brave_api_key
         self.exec_config = exec_config or ExecToolConfig()
         self.restrict_to_workspace = restrict_to_workspace
@@ -104,9 +106,8 @@ class SubagentManager:
         ))
         tools.register(WebSearchTool(api_key=self.brave_api_key))
         tools.register(WebFetchTool())

-        # Build messages with subagent-specific prompt
-        system_prompt = self._build_subagent_prompt(task)
+        system_prompt = self._build_subagent_prompt()
         messages: list[dict[str, Any]] = [
             {"role": "system", "content": system_prompt},
             {"role": "user", "content": task},
@@ -126,6 +127,7 @@ class SubagentManager:
                 model=self.model,
                 temperature=self.temperature,
                 max_tokens=self.max_tokens,
+                reasoning_effort=self.reasoning_effort,
             )

             if response.has_tool_calls:
@@ -204,44 +206,29 @@ Summarize this naturally for the user. Keep it brief (1-2 sentences). Do not men
             await self.bus.publish_inbound(msg)
             logger.debug("Subagent [{}] announced result to {}:{}", task_id, origin['channel'], origin['chat_id'])

-    def _build_subagent_prompt(self, task: str) -> str:
+    def _build_subagent_prompt(self) -> str:
         """Build a focused system prompt for the subagent."""
-        import time as _time
-        from datetime import datetime
-        now = datetime.now().strftime("%Y-%m-%d %H:%M (%A)")
-        tz = _time.strftime("%Z") or "UTC"
-        return f"""# Subagent
+        from nanobot.agent.context import ContextBuilder
+        from nanobot.agent.skills import SkillsLoader

-## Current Time
-{now} ({tz})
+        time_ctx = ContextBuilder._build_runtime_context(None, None)
+        parts = [f"""# Subagent

+{time_ctx}

 You are a subagent spawned by the main agent to complete a specific task.
+Stay focused on the assigned task. Your final response will be reported back to the main agent.

-## Rules
-1. Stay focused - complete only the assigned task, nothing else
-2. Your final response will be reported back to the main agent
-3. Do not initiate conversations or take on side tasks
-4. Be concise but informative in your findings
-
-## What You Can Do
-- Read and write files in the workspace
-- Execute shell commands
-- Search the web and fetch web pages
-- Complete the task thoroughly
-
-## What You Cannot Do
-- Send messages directly to users (no message tool available)
-- Spawn other subagents
-- Access the main agent's conversation history
-
 ## Workspace
-Your workspace is at: {self.workspace}
-Skills are available at: {self.workspace}/skills/ (read SKILL.md files as needed)
-
-When you have completed the task, provide a clear summary of your findings or actions."""
+{self.workspace}"""]
+
+        skills_summary = SkillsLoader(self.workspace).build_skills_summary()
+        if skills_summary:
+            parts.append(f"## Skills\n\nRead SKILL.md with read_file to use a skill.\n\n{skills_summary}")
+        return "\n\n".join(parts)

     async def cancel_by_session(self, session_key: str) -> int:
         """Cancel all subagents for the given session. Returns count cancelled."""
         tasks = [self._running_tasks[tid] for tid in self._session_tasks.get(session_key, [])
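
A toy illustration of the conditional prompt assembly in `_build_subagent_prompt` above, with invented strings: the Skills section is appended only when `SkillsLoader` actually finds skills, so an empty workspace gets a shorter prompt with no dangling header:

```python
# Toy values; mirrors the parts/join pattern in _build_subagent_prompt.
parts = ["# Subagent\n\n<runtime context>\n\n...\n\n## Workspace\n/home/user/workspace"]
skills_summary = ""  # assume SkillsLoader found no skills in this workspace
if skills_summary:
    parts.append(f"## Skills\n\nRead SKILL.md with read_file to use a skill.\n\n{skills_summary}")
prompt = "\n\n".join(parts)  # no empty "## Skills" section is emitted
```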

View File

@@ -2,8 +2,12 @@
 import asyncio
 import json
+import mimetypes
+import os
 import time
+from pathlib import Path
 from typing import Any
+from urllib.parse import unquote, urlparse

 import httpx
 from loguru import logger
@@ -96,6 +100,9 @@ class DingTalkChannel(BaseChannel):
     """

     name = "dingtalk"

+    _IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".gif", ".bmp", ".webp"}
+    _AUDIO_EXTS = {".amr", ".mp3", ".wav", ".ogg", ".m4a", ".aac"}
+    _VIDEO_EXTS = {".mp4", ".mov", ".avi", ".mkv", ".webm"}
+
     def __init__(self, config: DingTalkConfig, bus: MessageBus):
         super().__init__(config, bus)
@@ -191,40 +198,224 @@ class DingTalkChannel(BaseChannel):
logger.error("Failed to get DingTalk access token: {}", e) logger.error("Failed to get DingTalk access token: {}", e)
return None return None
@staticmethod
def _is_http_url(value: str) -> bool:
return urlparse(value).scheme in ("http", "https")
def _guess_upload_type(self, media_ref: str) -> str:
ext = Path(urlparse(media_ref).path).suffix.lower()
if ext in self._IMAGE_EXTS: return "image"
if ext in self._AUDIO_EXTS: return "voice"
if ext in self._VIDEO_EXTS: return "video"
return "file"
def _guess_filename(self, media_ref: str, upload_type: str) -> str:
name = os.path.basename(urlparse(media_ref).path)
return name or {"image": "image.jpg", "voice": "audio.amr", "video": "video.mp4"}.get(upload_type, "file.bin")
async def _read_media_bytes(
self,
media_ref: str,
) -> tuple[bytes | None, str | None, str | None]:
if not media_ref:
return None, None, None
if self._is_http_url(media_ref):
if not self._http:
return None, None, None
try:
resp = await self._http.get(media_ref, follow_redirects=True)
if resp.status_code >= 400:
logger.warning(
"DingTalk media download failed status={} ref={}",
resp.status_code,
media_ref,
)
return None, None, None
content_type = (resp.headers.get("content-type") or "").split(";")[0].strip()
filename = self._guess_filename(media_ref, self._guess_upload_type(media_ref))
return resp.content, filename, content_type or None
except Exception as e:
logger.error("DingTalk media download error ref={} err={}", media_ref, e)
return None, None, None
try:
if media_ref.startswith("file://"):
parsed = urlparse(media_ref)
local_path = Path(unquote(parsed.path))
else:
local_path = Path(os.path.expanduser(media_ref))
if not local_path.is_file():
logger.warning("DingTalk media file not found: {}", local_path)
return None, None, None
data = await asyncio.to_thread(local_path.read_bytes)
content_type = mimetypes.guess_type(local_path.name)[0]
return data, local_path.name, content_type
except Exception as e:
logger.error("DingTalk media read error ref={} err={}", media_ref, e)
return None, None, None
async def _upload_media(
self,
token: str,
data: bytes,
media_type: str,
filename: str,
content_type: str | None,
) -> str | None:
if not self._http:
return None
url = f"https://oapi.dingtalk.com/media/upload?access_token={token}&type={media_type}"
mime = content_type or mimetypes.guess_type(filename)[0] or "application/octet-stream"
files = {"media": (filename, data, mime)}
try:
resp = await self._http.post(url, files=files)
text = resp.text
result = resp.json() if resp.headers.get("content-type", "").startswith("application/json") else {}
if resp.status_code >= 400:
logger.error("DingTalk media upload failed status={} type={} body={}", resp.status_code, media_type, text[:500])
return None
errcode = result.get("errcode", 0)
if errcode != 0:
logger.error("DingTalk media upload api error type={} errcode={} body={}", media_type, errcode, text[:500])
return None
sub = result.get("result") or {}
media_id = result.get("media_id") or result.get("mediaId") or sub.get("media_id") or sub.get("mediaId")
if not media_id:
logger.error("DingTalk media upload missing media_id body={}", text[:500])
return None
return str(media_id)
except Exception as e:
logger.error("DingTalk media upload error type={} err={}", media_type, e)
return None
async def _send_batch_message(
self,
token: str,
chat_id: str,
msg_key: str,
msg_param: dict[str, Any],
) -> bool:
if not self._http:
logger.warning("DingTalk HTTP client not initialized, cannot send")
return False
url = "https://api.dingtalk.com/v1.0/robot/oToMessages/batchSend"
headers = {"x-acs-dingtalk-access-token": token}
payload = {
"robotCode": self.config.client_id,
"userIds": [chat_id],
"msgKey": msg_key,
"msgParam": json.dumps(msg_param, ensure_ascii=False),
}
try:
resp = await self._http.post(url, json=payload, headers=headers)
body = resp.text
if resp.status_code != 200:
logger.error("DingTalk send failed msgKey={} status={} body={}", msg_key, resp.status_code, body[:500])
return False
try: result = resp.json()
except Exception: result = {}
errcode = result.get("errcode")
if errcode not in (None, 0):
logger.error("DingTalk send api error msgKey={} errcode={} body={}", msg_key, errcode, body[:500])
return False
logger.debug("DingTalk message sent to {} with msgKey={}", chat_id, msg_key)
return True
except Exception as e:
logger.error("Error sending DingTalk message msgKey={} err={}", msg_key, e)
return False
async def _send_markdown_text(self, token: str, chat_id: str, content: str) -> bool:
return await self._send_batch_message(
token,
chat_id,
"sampleMarkdown",
{"text": content, "title": "Nanobot Reply"},
)
async def _send_media_ref(self, token: str, chat_id: str, media_ref: str) -> bool:
media_ref = (media_ref or "").strip()
if not media_ref:
return True
upload_type = self._guess_upload_type(media_ref)
if upload_type == "image" and self._is_http_url(media_ref):
ok = await self._send_batch_message(
token,
chat_id,
"sampleImageMsg",
{"photoURL": media_ref},
)
if ok:
return True
logger.warning("DingTalk image url send failed, trying upload fallback: {}", media_ref)
data, filename, content_type = await self._read_media_bytes(media_ref)
if not data:
logger.error("DingTalk media read failed: {}", media_ref)
return False
filename = filename or self._guess_filename(media_ref, upload_type)
file_type = Path(filename).suffix.lower().lstrip(".")
if not file_type:
guessed = mimetypes.guess_extension(content_type or "")
file_type = (guessed or ".bin").lstrip(".")
if file_type == "jpeg":
file_type = "jpg"
media_id = await self._upload_media(
token=token,
data=data,
media_type=upload_type,
filename=filename,
content_type=content_type,
)
if not media_id:
return False
if upload_type == "image":
# Verified in production: sampleImageMsg accepts media_id in photoURL.
ok = await self._send_batch_message(
token,
chat_id,
"sampleImageMsg",
{"photoURL": media_id},
)
if ok:
return True
logger.warning("DingTalk image media_id send failed, falling back to file: {}", media_ref)
return await self._send_batch_message(
token,
chat_id,
"sampleFile",
{"mediaId": media_id, "fileName": filename, "fileType": file_type},
)
     async def send(self, msg: OutboundMessage) -> None:
         """Send a message through DingTalk."""
         token = await self._get_access_token()
         if not token:
             return

-        # oToMessages/batchSend: sends to individual users (private chat)
-        # https://open.dingtalk.com/document/orgapp/robot-batch-send-messages
-        url = "https://api.dingtalk.com/v1.0/robot/oToMessages/batchSend"
-        headers = {"x-acs-dingtalk-access-token": token}
-
-        data = {
-            "robotCode": self.config.client_id,
-            "userIds": [msg.chat_id],  # chat_id is the user's staffId
-            "msgKey": "sampleMarkdown",
-            "msgParam": json.dumps({
-                "text": msg.content,
-                "title": "Nanobot Reply",
-            }, ensure_ascii=False),
-        }
-
-        if not self._http:
-            logger.warning("DingTalk HTTP client not initialized, cannot send")
-            return
-        try:
-            resp = await self._http.post(url, json=data, headers=headers)
-            if resp.status_code != 200:
-                logger.error("DingTalk send failed: {}", resp.text)
-            else:
-                logger.debug("DingTalk message sent to {}", msg.chat_id)
-        except Exception as e:
-            logger.error("Error sending DingTalk message: {}", e)
+        if msg.content and msg.content.strip():
+            await self._send_markdown_text(token, msg.chat_id, msg.content.strip())
+
+        for media_ref in msg.media or []:
+            ok = await self._send_media_ref(token, msg.chat_id, media_ref)
+            if ok:
+                continue
+            logger.error("DingTalk media send failed for {}", media_ref)
+            # Send visible fallback so failures are observable by the user.
+            filename = self._guess_filename(media_ref, self._guess_upload_type(media_ref))
+            await self._send_markdown_text(
+                token,
+                msg.chat_id,
+                f"[Attachment send failed: {filename}]",
+            )

     async def _on_message(self, content: str, sender_id: str, sender_name: str) -> None:
         """Handle incoming message (called by NanobotDingTalkHandler).

View File

@@ -326,13 +326,14 @@ class FeishuChannel(BaseChannel):
         await asyncio.sleep(1)

     async def stop(self) -> None:
-        """Stop the Feishu bot."""
+        """
+        Stop the Feishu bot.
+
+        Notice: lark.ws.Client does not expose a stop method; simply exiting the program will close the client.
+        Reference: https://github.com/larksuite/oapi-sdk-python/blob/v2_main/lark_oapi/ws/client.py#L86
+        """
         self._running = False
-        if self._ws_client:
-            try:
-                self._ws_client.stop()
-            except Exception as e:
-                logger.warning("Error stopping WebSocket client: {}", e)
         logger.info("Feishu bot stopped")

     def _add_reaction_sync(self, message_id: str, emoji_type: str) -> None:

View File

@@ -31,7 +31,8 @@ def _make_bot_class(channel: "QQChannel") -> "type[botpy.Client]":
     class _Bot(botpy.Client):
         def __init__(self):
-            super().__init__(intents=intents)
+            # Disable botpy's file log — nanobot uses loguru; default "botpy.log" fails on read-only fs
+            super().__init__(intents=intents, ext_handlers=False)

         async def on_ready(self):
             logger.info("QQ bot ready: {}", self.robot.name)

View File

@@ -282,6 +282,7 @@ def gateway(
         max_tokens=config.agents.defaults.max_tokens,
         max_iterations=config.agents.defaults.max_tool_iterations,
         memory_window=config.agents.defaults.memory_window,
+        reasoning_effort=config.agents.defaults.reasoning_effort,
         brave_api_key=config.tools.web.search.api_key or None,
         exec_config=config.tools.exec,
         cron_service=cron,
@@ -441,6 +442,7 @@ def agent(
         max_tokens=config.agents.defaults.max_tokens,
         max_iterations=config.agents.defaults.max_tool_iterations,
         memory_window=config.agents.defaults.memory_window,
+        reasoning_effort=config.agents.defaults.reasoning_effort,
         brave_api_key=config.tools.web.search.api_key or None,
         exec_config=config.tools.exec,
         cron_service=cron,
@@ -934,6 +936,7 @@ def cron_run(
         max_tokens=config.agents.defaults.max_tokens,
         max_iterations=config.agents.defaults.max_tool_iterations,
         memory_window=config.agents.defaults.memory_window,
+        reasoning_effort=config.agents.defaults.reasoning_effort,
         brave_api_key=config.tools.web.search.api_key or None,
         exec_config=config.tools.exec,
         restrict_to_workspace=config.tools.restrict_to_workspace,

View File

@@ -226,6 +226,7 @@ class AgentDefaults(Base):
     temperature: float = 0.1
     max_tool_iterations: int = 40
     memory_window: int = 100
+    reasoning_effort: str | None = None  # low / medium / high — enables LLM thinking mode


 class AgentsConfig(Base):
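
With the new field, thinking mode can be switched on from the agent defaults. A sketch assuming a JSON config file whose layout matches the `config.agents.defaults.reasoning_effort` access in the CLI changes above:

```json
{
  "agents": {
    "defaults": {
      "reasoning_effort": "high"
    }
  }
}
```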

View File

@@ -21,7 +21,8 @@ class LLMResponse:
finish_reason: str = "stop" finish_reason: str = "stop"
usage: dict[str, int] = field(default_factory=dict) usage: dict[str, int] = field(default_factory=dict)
reasoning_content: str | None = None # Kimi, DeepSeek-R1 etc. reasoning_content: str | None = None # Kimi, DeepSeek-R1 etc.
thinking_blocks: list[dict] | None = None # Anthropic extended thinking
@property @property
def has_tool_calls(self) -> bool: def has_tool_calls(self) -> bool:
"""Check if response contains tool calls.""" """Check if response contains tool calls."""
@@ -88,6 +89,7 @@ class LLMProvider(ABC):
model: str | None = None, model: str | None = None,
max_tokens: int = 4096, max_tokens: int = 4096,
temperature: float = 0.7, temperature: float = 0.7,
reasoning_effort: str | None = None,
) -> LLMResponse: ) -> LLMResponse:
""" """
Send a chat completion request. Send a chat completion request.

View File

@@ -18,13 +18,16 @@ class CustomProvider(LLMProvider):
         self._client = AsyncOpenAI(api_key=api_key, base_url=api_base)

     async def chat(self, messages: list[dict[str, Any]], tools: list[dict[str, Any]] | None = None,
-               model: str | None = None, max_tokens: int = 4096, temperature: float = 0.7) -> LLMResponse:
+               model: str | None = None, max_tokens: int = 4096, temperature: float = 0.7,
+               reasoning_effort: str | None = None) -> LLMResponse:
         kwargs: dict[str, Any] = {
             "model": model or self.default_model,
             "messages": self._sanitize_empty_content(messages),
             "max_tokens": max(1, max_tokens),
             "temperature": temperature,
         }
+        if reasoning_effort:
+            kwargs["reasoning_effort"] = reasoning_effort
         if tools:
             kwargs.update(tools=tools, tool_choice="auto")
         try:
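
A hypothetical call against the updated signature; the provider instance and model name are invented, and `reasoning_effort` is forwarded to the OpenAI-compatible endpoint only when truthy, so existing callers see an unchanged payload:

```python
from typing import Any

async def ask(provider: Any, prompt: str) -> Any:
    # provider: a CustomProvider instance (constructor assumed to take api_key
    # and api_base, mirroring the AsyncOpenAI(...) line above).
    return await provider.chat(
        [{"role": "user", "content": prompt}],
        model="some-reasoning-model",  # invented name
        reasoning_effort="high",       # added to kwargs only when set
    )
```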

View File

@@ -14,7 +14,7 @@ from nanobot.providers.registry import find_by_model, find_gateway
 # Standard OpenAI chat-completion message keys plus reasoning_content for
 # thinking-enabled models (Kimi k2.5, DeepSeek-R1, etc.).
-_ALLOWED_MSG_KEYS = frozenset({"role", "content", "tool_calls", "tool_call_id", "name", "reasoning_content"})
+_ALLOWED_MSG_KEYS = frozenset({"role", "content", "tool_calls", "tool_call_id", "name", "reasoning_content", "thinking_blocks"})

 _ALNUM = string.ascii_letters + string.digits

 def _short_tool_id() -> str:
@@ -176,6 +176,7 @@ class LiteLLMProvider(LLMProvider):
         model: str | None = None,
         max_tokens: int = 4096,
         temperature: float = 0.7,
+        reasoning_effort: str | None = None,
     ) -> LLMResponse:
         """
         Send a chat completion request via LiteLLM.
@@ -221,7 +222,11 @@
         # Pass extra headers (e.g. APP-Code for AiHubMix)
         if self.extra_headers:
             kwargs["extra_headers"] = self.extra_headers
+        if reasoning_effort:
+            kwargs["reasoning_effort"] = reasoning_effort
+            kwargs["drop_params"] = True

         if tools:
             kwargs["tools"] = tools
             kwargs["tool_choice"] = "auto"
@@ -264,13 +269,15 @@
         }

         reasoning_content = getattr(message, "reasoning_content", None) or None
+        thinking_blocks = getattr(message, "thinking_blocks", None) or None

         return LLMResponse(
             content=message.content,
             tool_calls=tool_calls,
             finish_reason=choice.finish_reason or "stop",
             usage=usage,
             reasoning_content=reasoning_content,
+            thinking_blocks=thinking_blocks,
         )

     def get_default_model(self) -> str:
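
`drop_params=True` is LiteLLM's safety valve here: providers that don't understand `reasoning_effort` have the parameter dropped instead of raising an error. A rough sketch of the effective call this provider now makes, with an assumed model name:

```python
import asyncio
import litellm

async def demo() -> None:
    # Rough sketch of the kwargs built above when reasoning_effort is set.
    response = await litellm.acompletion(
        model="anthropic/claude-sonnet-4-20250514",  # assumed example model
        messages=[{"role": "user", "content": "hi"}],
        reasoning_effort="high",  # mapped to thinking budgets where supported
        drop_params=True,         # unsupported params are dropped, not fatal
    )
    print(response.choices[0].message.content)

asyncio.run(demo())
```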

View File

@@ -31,6 +31,7 @@ class OpenAICodexProvider(LLMProvider):
         model: str | None = None,
         max_tokens: int = 4096,
         temperature: float = 0.7,
+        reasoning_effort: str | None = None,
     ) -> LLMResponse:
         model = model or self.default_model
         system_prompt, input_items = _convert_messages(messages)

View File

@@ -1,6 +1,6 @@
 [project]
 name = "nanobot-ai"
-version = "0.1.4.post2"
+version = "0.1.4.post3"
 description = "A lightweight personal AI assistant framework"
 requires-python = ">=3.11"
 license = {text = "MIT"}