Update docstring to accurately reflect that reply messages are supported in history parsing rather than forward messages. Add comment explaining that forward messages only store a simple marker since AI can use get_forward_msg tool to view full content.
Add comprehensive MCP (Model Context Protocol) compatibility layer to
enable connecting external MCP servers and extending AI capabilities.
Core Implementation:
- Add MCPToolSetRegistry class to manage MCP server connections and
tool registration
- Integrate MCP toolsets into ToolRegistry with async initialization
and graceful degradation
- Implement lazy loading via background tasks to avoid blocking
startup
- Support tool naming convention: mcp.{server_name}.{tool_name}
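The lazy-loading flow above can be sketched as a minimal registry that defers MCP discovery to a background task. Everything below except the MCPToolSetRegistry name and the mcp.{server_name}.{tool_name} convention is illustrative, not the project's actual code:

```python
import asyncio


class MCPToolSetRegistry:
    """Illustrative registry: connects to MCP servers without blocking startup."""

    def __init__(self) -> None:
        self.tools: dict[str, dict] = {}  # schema name -> tool metadata
        self._init_task: asyncio.Task | None = None

    def start_background_init(self, config: dict) -> None:
        # Lazy loading: schedule initialization instead of awaiting it,
        # so bot startup is never blocked by slow MCP servers.
        self._init_task = asyncio.create_task(self._initialize(config))

    async def _initialize(self, config: dict) -> None:
        try:
            for server_name, server in config.get("mcpServers", {}).items():
                # Naming convention from this changelog:
                # mcp.{server_name}.{tool_name}
                for tool_name in await self._discover_tools(server):
                    self.tools[f"mcp.{server_name}.{tool_name}"] = {
                        "server": server_name,
                        "tool": tool_name,
                    }
        except Exception as exc:  # graceful degradation: log, don't crash
            print(f"MCP initialization failed, continuing without MCP: {exc}")

    async def _discover_tools(self, server: dict) -> list[str]:
        # Placeholder for the real MCP handshake and tool listing;
        # "_example_tools" is a made-up key for demonstration only.
        return server.get("_example_tools", [])
```

In the real integration the same registry would be handed to the ToolRegistry, which awaits the task during its async initialization.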
Configuration:
- Add MCP_CONFIG_PATH environment variable to .env.example
- Create config/mcp.json.example with sample server configurations
(filesystem, brave-search, sqlite, github, postgres)
- Update .gitignore to exclude config/mcp.json
Dependencies:
- Add fastmcp>=2.14.4 dependency for MCP client functionality
Integration:
- Update AIClient to initialize MCP toolsets asynchronously on startup
- Add proper cleanup in AIClient.close() to terminate MCP connections
- Update ToolRegistry to load, initialize, and close MCP toolsets
Documentation:
- Add MCP support section to README.md with configuration guide
- Create comprehensive MCP toolset documentation with usage examples
and troubleshooting guide
Features:
- Automatic tool discovery and registration from MCP servers
- Graceful error handling when MCP servers are unavailable
- Detailed logging for debugging and monitoring
- Support for multiple concurrent MCP servers
Move the MCP configuration documentation from the "Extension and Development" section to the "Configuration" section for a better user experience.
Changes:
- Relocate the MCP configuration guide so it appears alongside the other configuration instructions
- Remove the duplicate MCP support section from the development section
- Improve the documentation structure by placing configuration information where users naturally look for it
This change makes it easier for users to find the MCP configuration instructions during initial setup.
Add a collapsible table of contents at the beginning of README.md to improve navigation and user experience.
Changes:
- Add a collapsible TOC using HTML <details> and <summary> tags
- Include all main sections and subsections with anchor links
- Style with a bold title and clear hierarchy
- Default to the collapsed state to reduce visual clutter
- Add separator lines above and below for better visual separation
Benefits:
- Easier navigation through long documentation
- Quick access to specific sections
- Improved readability without expanding content by default
Add comprehensive validation for the MCP server configuration format and provide clearer error messages to help users diagnose configuration issues.
Changes:
- Add type checking for server_config in the _initialize_server method
- Validate that mcpServers array elements are dictionaries before processing
- Provide detailed error messages explaining the correct configuration format
- Prevent crashes when the config file contains invalid data types
- Log configuration errors with the specific index
This fix resolves an AttributeError when the mcpServers array contains strings instead of dictionary objects, which was crashing the bot during MCP initialization.
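Such validation might look roughly like this (the helper name and message wording are illustrative; note that a later commit in this thread changes mcpServers from an array to an object):

```python
def validate_mcp_servers(servers: object) -> list:
    """Reject invalid mcpServers data with a clear, indexed error message."""
    if not isinstance(servers, list):
        raise TypeError(
            f"mcpServers must be an array of objects, got {type(servers).__name__}"
        )
    for index, entry in enumerate(servers):
        if not isinstance(entry, dict):
            # Naming the exact index avoids the AttributeError crash and
            # tells the user which entry to fix.
            raise TypeError(
                f"mcpServers[{index}] must be an object with 'name', 'command' "
                f"and 'args' fields, got {type(entry).__name__}"
            )
    return servers
```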
Fix MCP configuration format to match FastMCP's expected format.
FastMCP expects mcpServers as an object (dict) with server names as keys,
not as an array.
Changes:
- Update initialize() to handle mcpServers as dict instead of array
- Extract server_name from dict keys instead of name field
- Update config/mcp.json.example with correct object format
- Update all documentation examples in MCP README.md
- Update error messages to reflect correct format
Correct format:
{
  "mcpServers": {
    "server_name": {
      "command": "npx",
      "args": ["-y", "@package/name"]
    }
  }
}
This fixes "No MCP servers defined in the config" error when using
FastMCP Client with MCPConfigTransport.
Refactor the MCP registry to use a single FastMCP Client with the full configuration dictionary instead of creating separate clients for each server. This matches FastMCP's recommended usage pattern.
Changes:
- Remove per-server client management (_mcp_clients, _server_tools)
- Use Client(config) with the full mcpServers configuration
- Tools are automatically prefixed by FastMCP
- Simplify the tool registration and execution logic
- Remove the _initialize_server method (no longer needed)
Benefits:
- Matches FastMCP's documented usage pattern
- Simpler code with fewer moving parts
- Better resource management (a single client connection)
- Automatic tool prefixing by FastMCP
Change the MCP tool name format to use a dot separator instead of an underscore. FastMCP auto-prefixes tools in the server_name_tool_name format, but we want the mcp.server_name.tool_name format.
Changes:
- Replace the first underscore in tool_name with a dot
- Example: context7_resolve-library-id → mcp.context7.resolve-library-id
- Update the execute_tool docstring to reflect the new format
This provides cleaner, more consistent tool naming that matches the toolset naming convention used elsewhere in the project.
Fix tool name parsing to correctly extract the server name and tool name from FastMCP's server_name_tool_name format.
Changes:
- Store the mcp_servers config for prefix matching
- Parse tool names to extract server_name and tool_name
- Use the correct format for the schema (mcp.server_name.tool_name)
- Use the correct format for FastMCP calls (server_name_tool_name)
- Fix an indentation error in the __init__ method
Example:
- FastMCP tool name: context7_resolve-library-id
- Schema name: mcp.context7.resolve-library-id
- Call name: context7_resolve-library-id
Fix tool naming to correctly handle FastMCP's behavior, where:
- Single server: tool names have no prefix (e.g., resolve-library-id)
- Multiple servers: tool names are prefixed (e.g., context7_resolve-library-id)
Changes:
- Check whether the tool name contains a server prefix
- For a single server without a prefix, use that server's name
- For multiple servers with prefixes, extract the server and tool names
- Always use the mcp.server_name.tool_name format for the schema
- Use the original tool name (with or without the prefix) for FastMCP calls
Example:
- Single server: resolve-library-id → mcp.context7.resolve-library-id
- Multiple servers: context7_resolve-library-id → mcp.context7.resolve-library-id
This fix was discovered through local testing with uv.
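The naming rules above can be sketched as a small helper (the function name and signature are hypothetical):

```python
def schema_tool_name(fastmcp_name: str, servers: list[str]) -> str:
    """Map FastMCP's tool name to the mcp.{server}.{tool} schema format.

    FastMCP omits the server prefix when only one server is configured,
    and prefixes names as {server}_{tool} when there are several.
    """
    for server in servers:
        prefix = f"{server}_"
        if fastmcp_name.startswith(prefix):
            # Multi-server case: strip the prefix and re-join with dots.
            return f"mcp.{server}.{fastmcp_name[len(prefix):]}"
    if len(servers) == 1:
        # Single-server case: FastMCP did not prefix the tool name.
        return f"mcp.{servers[0]}.{fastmcp_name}"
    raise ValueError(f"Cannot attribute tool {fastmcp_name!r} to a server")
```

The original FastMCP name would still be kept alongside the schema name, since calls back into FastMCP must use the unmodified form.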
Move the tool summary display so it appears after MCP tools are loaded.
Changes:
- Extract the tool summary logic into a _log_tools_summary() method
- Show an initial summary without MCP tools during load_tools()
- Show the complete summary with MCP tools after initialize_mcp_toolsets()
- Add an 'MCP tools: (waiting for async initialization...)' status to the initial summary
Benefits:
- Users see the complete tool count, including MCP tools, after initialization
- Clear indication that MCP tools are loading asynchronously
- Better visibility into when MCP tools become available
Add example configuration for howtocook-mcp server to MCP config file. Also standardize indentation from 2 to 4 spaces and ensure proper trailing newline.
Add a howtocook-mcp server configuration example to the MCP toolset README documentation.
Changes:
- Add a howtocook-mcp usage example
- Update the available servers list with more categories
This helps users discover and configure more MCP servers.
Reorganize the MCP documentation to improve readability and reduce redundancy.
Changes:
- Consolidate the available MCP servers section into "Built-in Available MCP Servers"
- Simplify the server list to show only the context7 and howtocook examples
- Add a recommendation to visit mcp.so for discovering more servers
- Move the configuration format section before the examples for better flow
- Collapse the configuration examples into a <details> tag to reduce visual clutter
- Remove redundant tool usage descriptions from mcp/README.md
- Add a trailing period to the warning note for consistency
Benefits:
- Cleaner, more focused documentation
- Easier for users to find essential information
- Reduced redundancy between the main README and the toolset README
- Better visual hierarchy with collapsible sections
Restructure the tool loading summary output so it is only displayed when MCP tools are present and included. This prevents unnecessary logging when MCP tools are not available.
Changes:
- Move the entire logging block inside the 'if mcp_tools and include_mcp:' condition
- Comment out the now-redundant conditional checks for MCP tools display
- Ensure the summary only appears when MCP tools are actually loaded
Benefits:
- Cleaner log output when MCP tools are not configured
- Eliminates redundant conditional logic
- More precise control over when tool statistics are displayed
Fix a RuntimeError when calling MCP tools after initialization. The client connection was being closed as soon as the context manager block ended, causing subsequent tool calls to fail with a 'Client is not connected' error.
Changes:
- Replace the async with context manager with a manual __aenter__() call in initialize()
- Call __aexit__() manually in the close() method to properly close the connection
- Keep the client connection active throughout the application lifecycle
Benefits:
- MCP tools can be called successfully after initialization
- The connection persists until close() is explicitly called
- Fixes tool execution failures in the production environment
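The lifecycle change can be sketched as follows. The wrapper class is illustrative; it shows only the manual __aenter__/__aexit__ calls that replace the async with block:

```python
import asyncio


class PersistentConnection:
    """Keep an async-context-managed client open across the app lifetime.

    With `async with`, the connection closes as soon as the block ends,
    so initialize()/close() call the dunder methods manually instead.
    Class and method names here are illustrative.
    """

    def __init__(self, client) -> None:
        self._client = client

    async def initialize(self) -> None:
        # Open the connection and keep it open (no `async with`).
        await self._client.__aenter__()

    async def close(self) -> None:
        # Explicitly close what initialize() opened.
        await self._client.__aexit__(None, None, None)
```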
69gg added a commit that referenced this pull request on Jan 24, 2026
Introduce initial support for the MCP protocol, greatly extending the bot's capabilities:
1. Core features:
- Implement the basic MCP communication interface (based on fastmcp).
- Support dynamically mounting external tools/services (e.g. filesystem, sqlite, brave-search).
- Implement asynchronous initialization and persistent connections for MCP servers.
2. Configuration and documentation:
- Add support for the `config/mcp.json` configuration file.
- Improve the MCP-related documentation and configuration examples.
3. Release:
- Bump the version to 2.3.0.
69gg added a commit that referenced this pull request on Apr 19, 2026
1. Meme reanalyze now uses GIF multi-frame analysis (Flag #1): _process_reanalyze_job checks record.is_animated + gif_analysis_mode and calls _prepare_gif_multi_frames, matching the ingest code path. Frame files are cleaned up in all exit paths.
2. Repeat cooldown dict no longer grows unboundedly (Flag #17): _record_repeat_cooldown now prunes expired entries on each insert, preventing a slow memory leak from accumulated unique texts per group.
3. GIF frame files cleaned up on retryable LLM errors (Flag #25): both judge and describe stages in the ingest path now clean up multi-frame temp files before re-raising retryable exceptions.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
69gg added a commit that referenced this pull request on Apr 19, 2026
* feat(easter_egg): add repeat and inverted question mark features
- Add repeat_enabled and inverted_question_enabled config fields under [easter_egg]
- Implement group chat repeat: auto-repeat when 3 consecutive identical messages from different senders
- Inverted question mark: send ¿ instead of ? when repeat triggers on question-mark-only messages
- Race-condition protection via per-group asyncio.Lock
- Inject easter egg status into AI prompt context (model config info + system behavior)
- Update config.toml.example and docs/configuration.md
- Add tests for config loading (5), handler logic (12), and prompt injection (10)
* docs(deployment): prioritize source deployment over pip/uv tool
- Reorder sections: source deployment first, pip/uv tool second
- Update intro to state source deployment is the recommended primary method
- Add warning note on pip/uv tool section about incomplete support and testing
- Move Management-first flow into source deployment section
- Fix cross-references to point upward instead of downward
* fix(meme): improve animated meme detection and retry behavior
* docs: expand usage guide and enforce mandatory LaTeX dependency
- render_latex: remove mathtext fallback, enforce usetex=True strictly;
add _strip_document_wrappers to handle \begin{document} input;
catch RuntimeError and return helpful install prompt on missing TeX
- docs/build.md: add mandatory system LaTeX installation section with
per-platform commands (Debian/Arch/macOS/Windows) and verification steps
- docs/deployment.md: integrate LaTeX install steps into source deployment
workflow as step 3; add reminder callout in pip/uv tool section
- docs/usage.md: full rewrite with complete feature reference covering
all Agents, Toolsets, Tools, scheduler modes, FAQ commands, slash
command permission table, and multi-model pool
- tests/test_render_latex_tool.py: add 4 tests covering wrapper stripping,
successful embed delivery, and missing TeX error handling
* feat: attachment hash dedup, unified tags, LaTeX MathJax refactor, meme auto-match
- Attachment hash dedup: same scope+kind+SHA256 returns existing record
- Unified <attachment> tag: routes image/file by UID prefix, backward-compat <pic>
- Centralized dispatch_pending_file_sends() for non-image file delivery
- LaTeX rendering: migrate from matplotlib to MathJax + Playwright (no system TeX)
- LaTeX: support PNG and PDF output via output_format parameter
- Meme auto-match: annotate incoming images with meme descriptions by SHA256
- Update both prompt XML files with unified attachment tag documentation
- 37 new tests (713 total, all passing)
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
* fix(render_latex): pass proxy config to Playwright for CDN access
Read use_proxy/http_proxy/https_proxy from runtime_config and forward
to chromium.launch(proxy=...) so MathJax CDN loads correctly on
servers requiring a proxy. Also update use_proxy comment to be generic.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* feat(config): sync_config_template reports comment-change paths; gif_analysis_mode dropdown
- Add an updated_comment_paths field to ConfigTemplateSyncResult
- sync_config_text() diffs current/example comments and records the paths that changed
- The script displays the count and path list of comment updates with a ~ prefix
- Fix test mock objects missing the new field
- config-form.js: render gif_analysis_mode as a grid/multi dropdown
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* feat(repeat): exclude bot messages from repeat chains; make the threshold configurable
- Bot messages are written to the counter (rather than filtered out) so the window still sees them
- The trigger condition adds a bot_qq not in senders check,
covering three cases: the bot speaks first, the bot interjects mid-chain, and normal triggering after the bot slides out of the window
- Add repeat_threshold to RuntimeConfig (range 2-20, default 3)
- Replace the hardcoded 3 / 5 with n = repeat_threshold
- Update the repeat_enabled comment in config.toml.example and add the repeat_threshold field
- Add 5 new tests; all 17 repeat tests pass
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* fix(profile,latex): fix double-pluralized profile paths and the MathJax wait condition
/profile command:
- Fix the double-plural entity_type bug ("users"→"user", "groups"→"group"):
profile_storage._profile_path appends an "s", so the handler passing "users" produced
a "userss/" path and the profile file was never found
- Add a "g" shortcut subcommand (/p g is equivalent to /p group)
- Update the config.json help text and usage notes
LaTeX rendering:
- Fix the MathJax wait_for_function logic: the old code returned a Promise rather than a
boolean, so Playwright could never detect completion and always timed out
- Switch to a pageReady callback that sets a window._mjReady flag, which the wait checks
- Increase the timeout from 15s to 30s
- Add a MathJax configuration block to support inline math ($...$)
Tests: add g-shortcut and private-chat-rejection tests, plus HTML template tests
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* fix(summary): remove the waiting notice; switch to plain-text output
- Remove the "📝 Summarizing messages, please wait..." notice sent before /summary executes
- Rewrite summary_agent prompt.md: require plain unformatted text paragraphs,
forbidding emoji and markdown formatting (#, **, lists, etc.)
- Update the message-count assertions in 4 related test cases
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* fix(prompt): remove the example output style and simplify the paragraph description requirements
* fix(summary): align the message XML with the main AI and fix parameter pass-through
- Change summary_agent's fetch_messages output to the same XML message structure the main AI uses
- Preserve group_id, group_name, role, title, level, attachments, and other group-chat details
- Pass the count, time_range, and focus parsed from /summary and /sum to summary_agent in structured form
- Tighten the summary output requirements: default to a concise 2-3 short paragraphs
- Update the related unit tests
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* feat(summary): add a dedicated model configuration for message summaries
- Add a [models.summary] config section; /summary and /sum prefer this model
- Fall back to agent_model automatically when it is not configured, preserving backward compatibility
- Register summary_model for hot reload, queue intervals, and the runtime probe
- The runner supports a model_config_override context key to override the agent model
- Add related unit tests
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* feat: fake-@ detection + /profile multi-output modes
Feature 1 - fake-@ detection:
- Add utils/fake_at.py: BotNicknameCache automatically fetches the bot's group nickname
(per-group asyncio.Lock + TTL cache to prevent races)
- strip_fake_at() supports fullwidth @, NFKC normalization, and boundary detection to avoid false matches
- handlers.py: add fake-@ detection to the group message flow; normalized_text is used for command parsing
- ai_coordinator.py: handle_auto_reply gains an is_fake_at parameter
Feature 2 - /profile multi-output modes:
- Group chats default to merged-forward output; -t for plain text, -r for a rendered image
- Private chats always use plain text
- Forward/render failures automatically fall back to plain text
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
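A simplified sketch of the fake-@ stripping described above. The real strip_fake_at also handles nickname caching; this version shows only NFKC normalization (which folds the fullwidth ＠ into @) and a boundary check that avoids matching nicknames embedded in longer words:

```python
import re
import unicodedata


def strip_fake_at(text: str, nickname: str) -> tuple[str, bool]:
    """Detect a typed (fake) @nickname at the start of a message.

    Returns the text with the fake @ removed and a flag saying whether
    one was found. Illustrative sketch, not the project's exact code.
    """
    normalized = unicodedata.normalize("NFKC", text)
    nickname = unicodedata.normalize("NFKC", nickname)
    # (?!\w) is the boundary check: "@Botty" must not match nickname "Bot".
    pattern = re.compile(r"^@" + re.escape(nickname) + r"(?!\w)")
    match = pattern.match(normalized)
    if match:
        return normalized[match.end():].lstrip(), True
    return normalized, False
```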
* refactor(profile): polish the merged-forward and rendered output formats
- Remove the emoji title; the merged forward now uses 2 nodes: metadata + the full profile
- No longer truncate and split into multiple messages
- Render mode: card-style layout with a gradient metadata header and a body section
- Metadata includes type/ID/character count/update time
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* fix(profile): use the WebUI color scheme; raise the truncation limit to 5000
- Switch the render style to the WebUI warm palette (#f9f5f1/#e6e0d8/#3d3935)
- Raise the truncation limit from 3000 to 5000
- Update the corresponding test cases
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* feat(agents): add a deep-analysis agent for arXiv papers
- Add arxiv_analysis_agent: downloads the full arXiv PDF for structured academic analysis
- Paged reading design: fetch_paper retrieves metadata; read_paper_pages reads in batches to avoid token overflow
- Reuse the Undefined.arxiv module (client/downloader/parser)
- Update web_agent/callable.json to add summary_agent and arxiv_analysis_agent
- Add 18 unit tests covering the handler + tools + config structure
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* docs: update the agent count in CLAUDE.md to 7
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* feat(tools): add calculator, a multi-function safe calculator tool
- AST-based safe expression evaluation that rejects any non-mathematical operation
- Supports arithmetic, exponentiation, scientific functions, trigonometry, statistics, and combinatorics
- Constants: pi, e, tau, inf; functions: sqrt, log, sin, cos, factorial, gcd, mean, etc.
- allowed_callers: ["*"] permits all agents to call it
- Safety limits: exponents capped at 10000, expression length capped at 500
- 55 unit tests covering arithmetic/scientific/statistical/comparison/safety-rejection cases
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
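An AST-based evaluator of the kind described might look like this minimal sketch. The whitelist here is a small subset for illustration; the limits (exponent 10000, length 500) follow the commit's description, while the function body is hypothetical:

```python
import ast
import math
import operator

# Whitelisted operators and names; anything else is rejected outright.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}
_NAMES = {"pi": math.pi, "e": math.e, "sqrt": math.sqrt, "gcd": math.gcd}


def safe_eval(expr: str) -> float:
    """Evaluate a math expression via the AST, rejecting non-math nodes."""
    if len(expr) > 500:
        raise ValueError("expression too long")

    def walk(node: ast.AST):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            left, right = walk(node.left), walk(node.right)
            if isinstance(node.op, ast.Pow) and abs(right) > 10000:
                raise ValueError("exponent too large")
            return _OPS[type(node.op)](left, right)
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        if isinstance(node, ast.Name) and node.id in _NAMES:
            return _NAMES[node.id]
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id in _NAMES):
            return _NAMES[node.func.id](*[walk(a) for a in node.args])
        # Attribute access, imports, subscripts, etc. all land here.
        raise ValueError(f"disallowed syntax: {type(node).__name__}")

    return walk(ast.parse(expr, mode="eval"))
```

Because evaluation walks the parsed AST rather than calling eval(), there is no code path through which attribute access or imports can execute.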
* feat(config): make message-history limits fully configurable
- Add 7 history_* config entries to Config, managed under a [history] section
- All message fetch/search/analysis tools now read their limits from config instead of hardcoding them
- history_max_records supports 0 = unlimited (default 10000)
- Raise defaults: filtered_result_limit 50→200, summary_fetch_limit 500→1000,
summary_time_fetch_limit 2000→5000, onebot_fetch_limit 5000→10000,
group_analysis_limit 100→500
- Add a complete config section with bilingual comments to config.toml.example
- Add test_history_config.py to verify the config fields and helper logic
- Update test_fetch_messages_tool.py for the new defaults
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* feat(webui): full CRUD management for long-term memory
- Runtime API: add POST/PATCH/DELETE /api/v1/memory endpoints
- WebUI proxy: add proxy routes for memory create/update/delete
- Frontend: inline creation form, in-row editing (Ctrl+Enter to save / Esc to cancel), delete confirmation
- Fix api.js setting Content-Type automatically only for POST; extend it to PATCH/PUT/DELETE
- Fix CORS Allow-Methods missing PATCH/DELETE
- Add CSS styles for the edit/delete buttons and the inline editing area
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
* fix(profile): fix excessive whitespace and small fonts in -r rendering
- Add a viewport_width parameter to render_html_to_image (default 1280, leaving other callers unaffected)
- Profile rendering uses a narrow 480px viewport to produce a long image suited to phone screens
- Remove max-width/margin:auto centering; use width:100% to fill the viewport
- Raise font sizes from 12px/14px to 14px/15px, with line height 1.8
- Reduce padding to save space
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
* test: comprehensively backfill unit test coverage (804 → 1423)
Add 27 new test files with 619 test cases, covering the following previously untested modules:
Pure utility functions:
- utils/xml, cors, time_utils, message_targets, group_metrics
- utils/request_params, member_utils, common, tool_calls
- utils/message_utils, fake_at, cache
AI/Skills:
- ai/parsing, ai/tokens, ai/queue_budget
- skills/http_config, http_client, registry (SkillStats)
- context (RequestContext + helpers)
Storage/data:
- memory (MemoryStorage CRUD + dedup + caps)
- faq (FAQ dataclass + FAQStorage CRUD)
- rate_limit (RateLimiter per-role rate limiting)
- end_summary_storage, token_usage_storage, scheduled_task_storage
- config/models (format_netloc, resolve_bind_hosts)
- utils/qq_emoji
All 1423 tests pass ruff + mypy strict (zero errors in the new files)
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
* feat(help): add /h as an alias for /help
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
* feat: superadmin profile lookups + shared utils/coerce module + WebUI hardening
- feat(profile): superadmins can target a specific user or group with /p <QQ number> or /p g <group number>
- refactor(utils): unify safe_int/safe_float into utils/coerce.py, replacing 8 duplicated copies
- feat(webui): global JS error handling via window.onerror + toast
- feat(webui): AbortController request cancellation; abort stale requests on tab switch
- test: add 6 tests for superadmin-targeted profile lookups
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* feat(webui): skeleton-screen CSS + log time filtering + resource trend chart + raw TOML view
- Add a .skeleton shimmer animation CSS class (components.css)
- Log page: add datetime-local time-range filtering (log-view.js, state.js)
- Overview page: add a real-time Canvas CPU/memory trend chart (bot.js, 120-point history)
- Config page: add a "View TOML" raw-text toggle (main.js, config toggle)
- Add i18n translation entries (overview.chart, config.view_toml/form)
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* feat(webui): add Cmd/Ctrl+K command palette
- CSS: overlay, palette card, input, list items, keyboard hint styles
- HTML: modal overlay with input and list container
- i18n: zh/en strings for all palette commands
- JS: command list (tab nav, refresh, logout), open/close,
keyboard navigation (arrows + enter), fuzzy filtering,
Ctrl/Cmd+K toggle, Escape to close, click-outside dismiss
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* feat(webui): add config version history and rollback backend
- Add _backup_config() helper that creates timestamped backups in
data/config_backups/ with a 50-backup cap
- Auto-backup before every config save (POST /api/config) and patch
(POST /api/patch)
- GET /api/config/history — list all backups (newest first)
- POST /api/config/history/restore — restore a backup by name with
TOML validation and auto-backup of current config
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
* feat(webui): add modal focus trap and wire into command palette
- Add stack-based trapFocus/releaseFocus to ui.js
- Wire focus trap into openCmdPalette/closeCmdPalette
- Tab/Shift+Tab cycles within modal boundaries
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* refactor(config): split loader.py into sub-modules
Extract 6 sub-modules from the 3365-line monolith:
- coercers.py: type coercion/normalization helpers (~150 lines)
- resolvers.py: config value resolution (~106 lines)
- admin.py: local admin management (~44 lines)
- webui_settings.py: WebUI settings class (~61 lines)
- model_parsers.py: all model config parsers (~1250 lines)
- domain_parsers.py: domain config parsers + _update_dataclass (~284 lines)
loader.py retains Config class, TOML loading, and all re-exports for
backward compatibility. All 1429 tests pass, mypy strict clean.
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* refactor(api): extract helpers, probes, and OpenAPI from app.py
Split 3077-line monolith into focused modules:
- _helpers.py: utility classes and functions (~345 lines)
- _probes.py: HTTP/WS endpoint health probes (~145 lines)
- _openapi.py: OpenAPI spec builder (~180 lines)
app.py retains RuntimeAPIContext + RuntimeAPIServer (~2500 lines).
TYPE_CHECKING guard prevents circular imports for _openapi.
All 1429 tests pass, mypy strict clean.
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* fix(queue): register historian model in queue interval builder
The build_model_queue_intervals() function was missing the historian
model (glm-5), causing it to fall back to the default 1.0s dispatch
interval instead of the configured 0s. This slowed background historian
tasks unnecessarily.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
* perf(handlers): parallelize message preprocessing with asyncio.gather
Group messages: run attachment collection, group info fetch, and
history content parsing concurrently instead of serially. Reduces
preprocessing latency from sum(A+B+C) to max(A,B,C).
Private messages: run attachment collection and history content
parsing concurrently.
No behavioral changes — all data is still fully prepared before
any feature checks (keyword, repeat, bilibili, arxiv, commands, AI).
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
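The pattern above can be sketched with stand-in coroutines (the inner function names mirror the description; the sleeps stand in for real I/O):

```python
import asyncio
import time


async def preprocess_group_message() -> dict:
    """Run independent preprocessing steps concurrently with asyncio.gather."""

    async def collect_attachments() -> list[str]:
        await asyncio.sleep(0.05)  # stand-in for real network/disk I/O
        return ["attachment-1"]

    async def fetch_group_info() -> dict:
        await asyncio.sleep(0.05)
        return {"name": "demo-group"}

    async def parse_history() -> list[str]:
        await asyncio.sleep(0.05)
        return ["previous message"]

    # Wall-clock cost drops from sum(A+B+C) to max(A,B,C), and all results
    # are still fully prepared before any downstream feature checks run.
    attachments, info, history = await asyncio.gather(
        collect_attachments(), fetch_group_info(), parse_history()
    )
    return {"attachments": attachments, "info": info, "history": history}
```

This is safe precisely because the three steps do not depend on each other's results; gather only reorders waiting, not the data flow.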
* refactor(api): split app.py into route submodules
Extract 2491-line app.py into focused route modules under api/routes/:
- health.py (23 lines) — health endpoint
- system.py (228 lines) — OpenAPI + internal/external probes
- memory.py (156 lines) — memory CRUD
- memes.py (222 lines) — meme management (9 handlers)
- cognitive.py (86 lines) — cognitive events & profiles
- chat.py (359 lines) — WebUI chat with SSE streaming
- tools.py (462 lines) — tool invoke with async callbacks
- naga.py (897 lines) — Naga bind/send/unbind + moderation
Infrastructure:
- _context.py: RuntimeAPIContext dataclass (shared import root)
- _naga_state.py: NagaState class (request dedup + inflight tracking)
app.py reduced to 333 lines: server class + thin delegation wrappers
that preserve test compatibility (no test API changes needed).
Updated 4 test files to monkeypatch route-module-level symbols
instead of app-module-level symbols.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
* feat(repeat): add cooldown to prevent re-repeating same content
After the bot repeats a message, the same content enters a configurable
cooldown period (default 60 minutes) during which it won't be repeated
again, even if the repeat chain conditions are met again.
Features:
- New config: easter_egg.repeat_cooldown_minutes (default 60, 0=disabled)
- Question mark normalization: halfwidth ? and fullwidth ？ treated as equivalent for cooldown
- Per-group, per-text independent cooldown tracking
- Cooldown uses monotonic clock (immune to wall-clock changes)
Tests: 8 new repeat cooldown tests + 2 config parsing tests
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
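A sketch of the cooldown bookkeeping described above (class and method names are hypothetical, question-mark normalization is omitted for brevity, and the prune-on-insert behavior follows a later fix in this same log):

```python
import time


class RepeatCooldown:
    """Per-(group, text) cooldown tracker on a monotonic clock.

    Once recorded, the same text in the same group is suppressed until
    the cooldown elapses; expired entries are pruned on insert so the
    dict cannot grow without bound.
    """

    def __init__(self, cooldown_seconds: float) -> None:
        self._cooldown = cooldown_seconds
        self._expiry: dict[tuple[int, str], float] = {}

    def record(self, group_id: int, text: str) -> None:
        now = time.monotonic()  # immune to wall-clock changes
        self._expiry = {k: v for k, v in self._expiry.items() if v > now}
        self._expiry[(group_id, text)] = now + self._cooldown

    def is_suppressed(self, group_id: int, text: str) -> bool:
        expiry = self._expiry.get((group_id, text))
        return expiry is not None and expiry > time.monotonic()
```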
* docs: update the docs to cover repeat cooldown, fake-@ detection, profile output modes, and the API split
- configuration.md: add the repeat_threshold and repeat_cooldown_minutes fields
- slash-commands.md: document the /profile output modes (-f/-r/-t) and superadmin-targeted lookups
- ARCHITECTURE.md: add the Runtime API route submodule hierarchy
- development.md: add notes on the config/, api/routes/, and utils/ directories
- CHANGELOG.md: add a v3.3.2 entry
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
* docs(changelog): rewrite the v3.3.2 entry to cover all feature/u-guess changes
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
* chore(version): bump version to 3.3.2
* fix(test): skip LaTeX render tests when Playwright browser binary missing
The tests only caught ImportError but CI has Playwright installed without
browser binaries, causing a runtime error. The tests now check the error message
returned by execute() for 'Executable doesn't exist' and skips.
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
* fix: address PR review findings from Devin and Codex
- fix(prompt): split concatenated bullet points in judge_meme_image.txt
- fix(prompts): use configurable repeat_threshold instead of hardcoded 3
- fix(memes): guard safe_int(0) → None for group_id sentinel value
- fix(fake_at): normalize cached nicknames with NFKC (not just casefold)
- fix(handlers): skip repeat counting for empty text messages
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
* fix: address additional Devin review flags
- Add 'pic' key to _MEDIA_LABELS for correct tag-based fallback label
- Harden config backup path validation with backslash check and
resolve()-based containment verification
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
* fix(config): parse gif_analysis_mode/gif_analysis_frames from TOML
MemeConfig had these fields with defaults but _parse_memes_config
never read them from the [memes] section, silently ignoring user
configuration.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
* fix(memes): clean up GIF multi-frame analysis temp files
_prepare_gif_multi_frames creates per-frame PNG files ({uid}_f{i}.png) for
LLM analysis but they were never cleaned up. Add _cleanup_gif_frame_files
helper and call it:
- In _cleanup_meme_artifacts when uid is provided
- In delete_meme
- After judge/describe AI calls complete (frames no longer needed)
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
* fix(attachments): skip prompt_ref append on render error
_render_image_tag and _render_file_tag error paths returned early but
attachments.append(record.prompt_ref()) still executed unconditionally.
Change both helpers to return bool; only append on success.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
* docs(prompts): strengthen naga_code_analysis_agent call guidance
Reinforce that NagaAgent technical questions must call the agent before
replying. Add explicit when_to_call scenarios and emphasize not relying
on memory for frequently-updated project details.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
* fix(prompt): clarify keyword auto-reply is code-path-only to prevent AI mimicry
The AI model (kimi-k2.5) read the system prompt about [系统关键词自动回复]
and mimicked the format via send_message tool, fabricating a reply that
didn't exist in the codebase. Reworded the prompt to explicitly state
these messages are generated by a separate code path (handlers.py),
use fixed responses, and never go through the AI's tool calls.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
* fix(repeat): don't silently drop messages when cooldown suppresses repeat
Move the early 'return' inside the else branch (actual repeat sent) so
that when cooldown suppresses a repeat, the message continues through
downstream handlers (bilibili/arxiv/command/AI auto-reply) instead of
being silently dropped.
Update test_repeat_cooldown_suppresses_same_text to verify that
ai_coordinator.handle_auto_reply is still called after suppression.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
* fix(memes,repeat): address 3 Devin review bugs
1. Meme reanalyze now uses GIF multi-frame analysis (Flag #1):
_process_reanalyze_job checks record.is_animated + gif_analysis_mode
and calls _prepare_gif_multi_frames, matching the ingest code path.
Frame files are cleaned up in all exit paths.
2. Repeat cooldown dict no longer grows unboundedly (Flag #17):
_record_repeat_cooldown now prunes expired entries on each insert,
preventing slow memory leak from accumulated unique texts per group.
3. GIF frame files cleaned up on retryable LLM errors (Flag #25):
Both judge and describe stages in the ingest path now clean up
multi-frame temp files before re-raising retryable exceptions.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
* fix(latex): preserve LaTeX commands like \nu \nabla \neq during \n replacement
The aggressive content.replace('\\n', '\n') destroyed any LaTeX command
starting with \n (\nu, \nabla, \neq, \neg, etc.). Use a regex with a negative
lookahead so that only a standalone \\n escape, not a LaTeX command, is
converted to a real newline.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
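A hypothetical reconstruction of the fix. The exact lookahead used in the project is not shown in this log, so the protected command list below is an assumption:

```python
import re

# Protect known LaTeX commands beginning with \n (\nu, \nabla, \neq, \neg,
# \not, \nmid, ...) and convert every other literal \n escape to a newline.
_LATEX_N_COMMANDS = r"(?:u|abla|eq|eg|ot|mid|onumber|ewline|ewcommand)"
_LITERAL_NEWLINE = re.compile(r"\\n(?!" + _LATEX_N_COMMANDS + r"\b)")


def unescape_newlines(content: str) -> str:
    """Turn literal \\n escapes into newlines without mangling LaTeX commands."""
    return _LITERAL_NEWLINE.sub("\n", content)
```

The earlier blanket content.replace('\\n', '\n') had no way to make this distinction, which is exactly what destroyed \nu and friends.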
* feat(vision): add configurable max_tokens to VisionModelConfig
Previously vision model max_tokens was hardcoded (256/512/8192) at call
sites. With thinking-enabled models like kimi-k2.5, the small budgets
were entirely consumed by the thinking chain, leaving no room for
tool-call output.
- Add max_tokens field to VisionModelConfig (default 8192)
- Parse from config.toml [models.vision] and VISION_MODEL_MAX_TOKENS env
- Replace all hardcoded max_tokens in multimodal.py with config value
- Update config.toml.example with documentation
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* refactor(historian): use context_recent_messages_limit and XML format
The historian now sees the same message count and XML format as the main
AI, improving disambiguation quality:
- Extract shared format_message_xml() / format_messages_xml() into
utils/xml.py; deduplicate prompts.py and fetch_messages handler
- Historian recent messages use context_recent_messages_limit (default 20)
instead of historian_recent_messages_inject_k (was 12)
- Messages formatted as XML (matching main AI) instead of plain text
bullet list, including attachments and full metadata
- Update historian_rewrite.md prompt to note XML format
- Update tests for new format and import paths
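A minimal sketch of what a shared `format_message_xml()` / `format_messages_xml()` pair could look like. The signatures and tag layout here are assumptions; the real helpers in `utils/xml.py` also carry attachments and fuller metadata:

```python
from xml.sax.saxutils import escape, quoteattr


def format_message_xml(sender: str, timestamp: str, text: str) -> str:
    # quoteattr() escapes and quotes attribute values; escape() handles
    # the element body, so <, >, & in user text cannot break the markup.
    return (f"<message sender={quoteattr(sender)} time={quoteattr(timestamp)}>"
            f"{escape(text)}</message>")


def format_messages_xml(messages: list[tuple[str, str, str]]) -> str:
    # Render the last N messages (context_recent_messages_limit) as one block.
    return "\n".join(format_message_xml(*m) for m in messages)
```

Sharing one formatter between the main AI and the historian is what guarantees both see byte-identical context, which is the point of the refactor.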
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* fix(webui): cap top_k overflow, debounce meme search, reorder dashboard
- Cap top_k to 500 in frontend (appendPositiveIntParam) and backend
(cognitive service + vector_store _safe_positive_int) to prevent
ChromaDB OverflowError when large integers are passed
- Add max=500 to all top_k HTML inputs
- Cap fetch_k to 10000 in vector_store._query() as safety net
- Add debounced auto-search (350ms) for meme text inputs with
pending-refresh pattern to avoid stale results
- Enter key flushes debounce timer for instant search
- Move 运行环境 card before 资源趋势 chart in dashboard layout
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* fix(hot_reload,repeat): address 2 Devin review bugs
- Add historian_model to hot_reload tracking sets (_QUEUE_INTERVAL_KEYS
and _MODEL_NAME_KEYS) so config changes take effect without restart
- Fix memory leak in _record_repeat_cooldown: skip recording entirely
when cooldown_minutes=0 instead of accumulating never-evicted entries
- Add test assertion verifying no cooldown entries when disabled
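The tracking-set mechanism can be illustrated as follows. `_MODEL_NAME_KEYS` and `historian_model` come from the commit; the other key names and the helper around them are hypothetical:

```python
# Keys whose changes the hot-reload watcher applies without a restart.
# historian_model was missing from this set before the fix above.
_MODEL_NAME_KEYS = {"main_model", "vision_model", "historian_model"}


def needs_model_reload(changed_keys: set[str]) -> bool:
    # A config edit only takes effect live if at least one changed key
    # is in the tracked set; untracked keys require a restart.
    return bool(changed_keys & _MODEL_NAME_KEYS)
```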
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* docs(changelog): update v3.3.2 with all session fixes and improvements
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* docs(changelog): simplify v3.3.2 format to match prior versions
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* docs(webui): add WebUI usage guide and cross-reference links
- Create docs/webui-guide.md covering all 8 tabs, config, shortcuts, FAQ
- Add link in README.md documentation navigation section
- Add reference in docs/deployment.md startup section
- Add reference in docs/management-api.md recommended entry section
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* fix(calculator): cap combinatorial function args to prevent CPU exhaustion
Add _MAX_COMBINATORIAL_ARG=1000 limit for factorial/perm/comb to prevent
adversarial inputs like factorial(99999) from consuming excessive CPU.
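A sketch of the guard, using the limit named in the commit; the wrapper name and error message are illustrative:

```python
import math

_MAX_COMBINATORIAL_ARG = 1000  # cap from the commit message


def safe_factorial(n: int) -> int:
    # Reject oversized arguments before calling math.factorial so an
    # adversarial factorial(99999) cannot consume excessive CPU.
    if not 0 <= n <= _MAX_COMBINATORIAL_ARG:
        raise ValueError(f"argument must be in [0, {_MAX_COMBINATORIAL_ARG}]")
    return math.factorial(n)
```

The same bound would apply to perm/comb wrappers, since their cost also grows with the size of the arguments.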
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
---------
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
[PR] Add Model Context Protocol (MCP) Support
Overview
This PR adds a Model Context Protocol (MCP) compatibility layer to the Undefined bot, allowing it to connect to external MCP servers and greatly extend the AI's tool capabilities and data-access range.
Main Changes
✨ New Features
- New src/Undefined/skills/toolsets/mcp/ module: MCPToolSetRegistry loads the MCP configuration, connects to servers, and converts tool formats.
- Tools are registered under the mcp.{server_name}.{tool_name} naming scheme, e.g. mcp.filesystem.read_file, mcp.brave-search.search.
- New config/mcp.json.example sample configuration; MCP_CONFIG_PATH selects a custom config path.
- .gitignore updated to keep the real config file (and its secrets) out of version control.
- README.md gains a new MCP configuration section with detailed usage instructions.
- Dedicated technical documentation at src/Undefined/skills/toolsets/mcp/README.md.
⚙️ Technical Improvements
- Efficient MCP client built on the fastmcp library.
🛠️ Supported MCP Server Examples
- Filesystem (@modelcontextprotocol/server-filesystem)
- Brave Search (@modelcontextprotocol/server-brave-search)
- SQLite (@modelcontextprotocol/server-sqlite)
- GitHub (@modelcontextprotocol/server-github)
- Context7 (@upstash/context7-mcp)
- HowToCook (howtocook-mcp)
📂 File Changes
- src/Undefined/skills/toolsets/mcp/__init__.py (264 lines)
- src/Undefined/skills/toolsets/mcp/README.md (172 lines)
- config/mcp.json.example
- src/Undefined/skills/tools/__init__.py (tool integration)
- src/Undefined/ai.py (tool-call improvements)
- README.md (documentation update)
- .gitignore, .env.example (config protection)
- pyproject.toml, uv.lock (add fastmcp)
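For orientation, a config fragment in the shape commonly used by MCP clients (an "mcpServers" map of command + args per server) might look like the following. This is an illustrative sketch with placeholder paths and keys; the repository's config/mcp.json.example is authoritative:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": { "BRAVE_API_KEY": "your-key-here" }
    }
  }
}
```

With such a file in place, tools from each server surface to the AI under names like mcp.filesystem.read_file and mcp.brave-search.search.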