
feat: v3.3.2 — architecture refactor, fake-@ detection, and cognitive profiling enhancements (#55)

Merged — 69gg merged 57 commits into main from feature/u-guess on Apr 19, 2026
Conversation

69gg (Owner) commented on Apr 19, 2026

Overview

A large-scale refactor of the core architecture plus several feature additions: the Runtime API is split into routed sub-modules, the config system is broken into modules, and fake-@ detection and multiple /profile output modes are introduced. Also ships a full upgrade of the repeat mechanism, parallelized message preprocessing, several WebUI interaction features, an arXiv paper-analysis agent, and a safe calculator tool.

Main Changes

🚀 Features

  • Fake-@ detection: in group chats, the plain-text form "@" + bot nickname is also recognized as an @-message; the nickname is fetched automatically from group context (race-safe), and "@nickname /command" correctly triggers slash commands
  • /profile output modes: -f merged-forward (default), -r render to image, -t plain-text send
  • Superadmin cross-target profiles: /p <QQ id> or /p g <group id> to view any user's or group's profile
  • Repeat system upgrade: configurable trigger threshold repeat_threshold (2–20), bot messages excluded from the repeat chain, and a cooldown via repeat_cooldown_minutes
  • arXiv paper deep-analysis agent
  • Multi-function safe calculator tool
  • Configurable message-history limits via [history].max_records
  • WebUI: command palette (Cmd/Ctrl+K), skeleton-screen loading, log time filtering, resource trend chart, raw TOML view, config version history with rollback, and long-term memory CRUD management

🏗️ Refactoring

  • Runtime API split: app.py (2,491 lines) → 8 route sub-modules (api/routes/); the main file keeps only thin wrapper delegation
  • Config system modularized: config/ split into loader.py, models.py, and hot_reload.py
  • New shared modules: utils/coerce.py (safe type coercion) and utils/fake_at.py (fake-@ text detection)

⚡ Performance

  • Parallelized message preprocessing: asyncio.gather runs the safety check, cognitive retrieval, and fake-@ detection concurrently
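The parallel preprocessing described above — running the safety check, cognitive retrieval, and fake-@ detection concurrently — can be sketched with asyncio.gather. The coroutine names below are illustrative stand-ins, not the project's actual functions.

```python
import asyncio

# Hypothetical stand-ins for the three preprocessing stages; the real
# project coroutines have different names and signatures.
async def safety_check(text: str) -> bool:
    await asyncio.sleep(0)  # placeholder for real async I/O
    return "forbidden" not in text

async def cognitive_retrieval(text: str) -> list:
    await asyncio.sleep(0)
    return [f"memory about {text!r}"]

async def fake_at_detection(text: str) -> bool:
    await asyncio.sleep(0)
    return text.startswith("@bot")

async def preprocess(text: str) -> list:
    # gather awaits all three concurrently and returns results in argument order.
    return await asyncio.gather(
        safety_check(text), cognitive_retrieval(text), fake_at_detection(text)
    )

result = asyncio.run(preprocess("@bot hello"))
```

Because the three stages are independent, total latency drops to roughly the slowest stage rather than the sum of all three.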

🐛 Fixes

  • Scheduling issue where the queue system's historian model was not registered
  • /profile rendering whitespace and undersized fonts; now uses the WebUI color scheme
  • Double-plural profile storage paths and the MathJax wait condition
  • Message-summary parameter passthrough and output formatting

🧪 Tests

  • Test count grew from ~804 to 1438+
  • ruff + mypy with zero errors

Verification Commands

uv run pytest tests/          # 1438+ tests, all passing
uv run ruff format --check .  # formatting check
uv run ruff check .           # lint check
uv run mypy .                 # strict type checking


69gg and others added 30 commits April 17, 2026 23:52
- Add repeat_enabled and inverted_question_enabled config fields under [easter_egg]
- Implement group chat repeat: auto-repeat when 3 consecutive identical messages arrive from different senders
- Inverted question mark: send ¿ instead of ? when repeat triggers on question-mark-only messages
- Race-condition protection via per-group asyncio.Lock
- Inject easter egg status into AI prompt context (model config info + system behavior)
- Update config.toml.example and docs/configuration.md
- Add tests for config loading (5), handler logic (12), and prompt injection (10)
- Reorder sections: source deployment first, pip/uv tool second
- Update intro to state source deployment is the recommended primary method
- Add warning note on pip/uv tool section about incomplete support and testing
- Move Management-first flow into source deployment section
- Fix cross-references to point upward instead of downward
- render_latex: remove mathtext fallback, enforce usetex=True strictly;
  add _strip_document_wrappers to handle \begin{document} input;
  catch RuntimeError and return helpful install prompt on missing TeX
- docs/build.md: add mandatory system LaTeX installation section with
  per-platform commands (Debian/Arch/macOS/Windows) and verification steps
- docs/deployment.md: integrate LaTeX install steps into source deployment
  workflow as step 3; add reminder callout in pip/uv tool section
- docs/usage.md: full rewrite with complete feature reference covering
  all Agents, Toolsets, Tools, scheduler modes, FAQ commands, slash
  command permission table, and multi-model pool
- tests/test_render_latex_tool.py: add 4 tests covering wrapper stripping,
  successful embed delivery, and missing TeX error handling
…me auto-match

- Attachment hash dedup: same scope+kind+SHA256 returns existing record
- Unified <attachment> tag: routes image/file by UID prefix, backward-compat <pic>
- Centralized dispatch_pending_file_sends() for non-image file delivery
- LaTeX rendering: migrate from matplotlib to MathJax + Playwright (no system TeX)
- LaTeX: support PNG and PDF output via output_format parameter
- Meme auto-match: annotate incoming images with meme descriptions by SHA256
- Update both prompt XML files with unified attachment tag documentation
- 37 new tests (713 total, all passing)
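The attachment-dedup rule in the first bullet (same scope + kind + SHA-256 returns the existing record) can be sketched as below. AttachmentStore, ingest, and the record shape are hypothetical names for illustration, not the project's actual API.

```python
import hashlib

class AttachmentStore:
    """Illustrative dedup store: ingesting the same (scope, kind, SHA-256)
    returns the existing record instead of creating a duplicate."""

    def __init__(self) -> None:
        self._records: dict = {}

    def ingest(self, scope: str, kind: str, data: bytes) -> dict:
        digest = hashlib.sha256(data).hexdigest()
        key = (scope, kind, digest)
        if key in self._records:
            return self._records[key]  # dedup hit: reuse the existing record
        record = {"scope": scope, "kind": kind, "sha256": digest}
        self._records[key] = record
        return record

store = AttachmentStore()
first = store.ingest("group:1", "image", b"\x89PNG...")
second = store.ingest("group:1", "image", b"\x89PNG...")
```

Keying on content hash rather than filename means re-uploads of the same bytes never create duplicate records, while the same bytes in a different scope or kind still get their own entry.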

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Read use_proxy/http_proxy/https_proxy from runtime_config and forward
to chromium.launch(proxy=...) so MathJax CDN loads correctly on
servers requiring a proxy. Also update use_proxy comment to be generic.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- ConfigTemplateSyncResult: add an updated_comment_paths field
- sync_config_text(): diff current/example comments and record paths whose comments changed
- Script displays the comment-update count and path list with a ~ prefix
- Fix test mock objects missing the new field
- config-form.js: render gif_analysis_mode as a grid/multi dropdown

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Write bot messages into the counter (instead of filtering them) so the window can see them
- Add a bot_qq not in senders check to the trigger condition,
  covering three cases: bot speaks first, bot interjects mid-chain, and bot slides out of the window so the repeat triggers normally
- RuntimeConfig: add repeat_threshold (range 2–20, default 3)
- Replace the hardcoded 3 / 5 with n = repeat_threshold
- config.toml.example: update the repeat_enabled comment and add the repeat_threshold field
- Add 5 tests; all 17 repeat tests pass

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
profile command:
- Fix the double-plural entity_type bug ("users"→"user", "groups"→"group"):
  profile_storage._profile_path appends an "s", so the handler passing "users"
  produced the path "userss/" and never found the profile file
- Add a "g" shortcut subcommand (/p g is equivalent to /p group)
- Update config.json help text and usage notes

LaTeX rendering:
- Fix the MathJax wait_for_function logic: the old code returned a Promise
  rather than a boolean, so Playwright could not detect completion and always timed out
- Switch to a pageReady callback that sets a window._mjReady flag; the wait checks that flag
- Raise the timeout from 15s to 30s
- Add a MathJax config block supporting inline math ($...$)

Tests: add g-shortcut and private-chat-rejection tests, plus HTML template tests

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Remove the "📝 Summarizing messages, please wait..." notice sent before /summary runs
- Rewrite summary_agent prompt.md: require plain prose paragraphs,
  banning emoji and markdown formatting (#, **, lists, etc.)
- Update message-count assertions in 4 related test cases

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Change summary_agent's fetch_messages output to the same XML message structure as the main AI
- Preserve group_id, group_name, role, title, level, attachments, and other group-chat metadata
- Pass the count, time_range, and focus parsed from /summary and /sum to summary_agent in structured form
- Tighten the summary output requirements: default to 2–3 concise short paragraphs
- Update related unit tests

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Add a [models.summary] config section; /summary and /sum prefer this model
- Fall back automatically to agent_model when unset, for backward compatibility
- Register summary_model for hot reload, queue intervals, and the runtime probe
- runner supports a model_config_override context key to override the agent model
- Add related unit tests

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Feature 1 — fake-@ recognition:
- Add utils/fake_at.py: BotNicknameCache auto-fetches the bot's group nickname
  (per-group asyncio.Lock + TTL cache to prevent races)
- strip_fake_at() supports fullwidth @, NFKC normalization, and boundary checks to avoid false matches
- handlers.py: group-message flow adds fake-@ detection; normalized_text is used for command parsing
- ai_coordinator.py: handle_auto_reply gains an is_fake_at parameter

Feature 2 — /profile output modes:
- Group chats default to merged-forward; -t plain text, -r rendered image
- Private chats are always plain text
- Forward/render failures fall back to plain text automatically
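The core of the fake-@ recognition above — NFKC normalization so the fullwidth ＠ matches, plus a boundary check so "@nicknameX" is not a false match — can be sketched as follows. This is a minimal illustration; the project's strip_fake_at has a different signature and more cases.

```python
import unicodedata

def strip_fake_at(text: str, nickname: str):
    """Return the remaining command text if `text` starts with a textual
    '@' + nickname, else None. Illustrative sketch, not the project's code."""
    # NFKC folds the fullwidth '＠' (U+FF20) into the ASCII '@'.
    norm = unicodedata.normalize("NFKC", text).lstrip()
    prefix = "@" + nickname
    if not norm.startswith(prefix):
        return None
    rest = norm[len(prefix):]
    # Boundary check: the nickname must end at whitespace or end-of-text,
    # so '@bottle' does not match nickname 'bot'.
    if rest and not rest[0].isspace():
        return None
    return rest.strip()
```

With this shape, "＠bot /help" normalizes to "@bot /help" and yields "/help" for command parsing, while "@bottle hi" is rejected.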

Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Remove emoji headings; merged-forward now uses 2 nodes: metadata + full profile
- No longer truncates into split messages
- Render mode: card layout with a gradient metadata header + body section
- Metadata includes type/ID/character count/update time

Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Switch the render style to the WebUI warm palette (#f9f5f1/#e6e0d8/#3d3935)
- Raise the truncation cap from 3000 to 5000
- Update the corresponding test cases

Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Add arxiv_analysis_agent: downloads the full arXiv PDF for structured academic analysis
- Paged-reading design: fetch_paper gets metadata; read_paper_pages reads in batches to avoid token overflow
- Reuse the Undefined.arxiv module (client/downloader/parser)
- Update web_agent/callable.json to add summary_agent and arxiv_analysis_agent
- Add 18 unit tests covering handler + tools + config structure

Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- AST-based safe expression evaluation; rejects any non-math operation
- Supports: arithmetic, exponentiation, scientific functions, trigonometry, statistics, combinatorics
- Constants: pi, e, tau, inf; functions: sqrt, log, sin, cos, factorial, gcd, mean, etc.
- allowed_callers: ["*"] permits all agents to call it
- Safety limits: exponent cap 10000, expression length cap 500
- 55 unit tests covering arithmetic/scientific/statistics/comparison/safety rejection
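The AST-whitelisting approach can be sketched as below: parse the expression, then walk the tree and evaluate only explicitly allowed node types, raising on everything else. This is a minimal illustration under assumed limits; the project's tool supports many more functions and operators.

```python
import ast
import math

_ALLOWED_FUNCS = {"sqrt": math.sqrt, "factorial": math.factorial, "gcd": math.gcd}
_ALLOWED_NAMES = {"pi": math.pi, "e": math.e, "tau": math.tau, "inf": math.inf}
_MAX_EXPR_LEN = 500      # length cap, per the commit message
_MAX_EXPONENT = 10000    # exponent cap, per the commit message

def safe_eval(expr: str) -> float:
    if len(expr) > _MAX_EXPR_LEN:
        raise ValueError("expression too long")
    tree = ast.parse(expr, mode="eval")

    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.Name) and node.id in _ALLOWED_NAMES:
            return _ALLOWED_NAMES[node.id]
        if isinstance(node, ast.BinOp):
            left, right = walk(node.left), walk(node.right)
            if isinstance(node.op, ast.Add):
                return left + right
            if isinstance(node.op, ast.Sub):
                return left - right
            if isinstance(node.op, ast.Mult):
                return left * right
            if isinstance(node.op, ast.Div):
                return left / right
            if isinstance(node.op, ast.Pow):
                if abs(right) > _MAX_EXPONENT:
                    raise ValueError("exponent too large")
                return left ** right
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            fn = _ALLOWED_FUNCS.get(node.func.id)
            if fn is not None:
                return fn(*[walk(a) for a in node.args])
        # Anything not whitelisted above (attributes, imports, etc.) is rejected.
        raise ValueError("disallowed expression")

    return walk(tree)
```

Because evaluation happens on the parsed AST rather than via eval(), attribute access, dunder tricks, and arbitrary calls never execute: they hit the final raise.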

Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Config: add 7 history_* options, managed under a [history] section
- Replace hardcoded limits in all message fetch/search/analysis tools with config reads
- history_max_records supports 0 = unlimited (default 10000)
- Raise defaults: filtered_result_limit 50→200, summary_fetch_limit 500→1000,
  summary_time_fetch_limit 2000→5000, onebot_fetch_limit 5000→10000,
  group_analysis_limit 100→500
- config.toml.example: add the full section with bilingual comments
- Add test_history_config.py to verify config fields and helper logic
- Update test_fetch_messages_tool.py for the new defaults

Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Runtime API: add POST/PATCH/DELETE /api/v1/memory endpoints
- WebUI proxy: add memory create/update/delete proxy routes
- Frontend: inline creation form, inline editing (Ctrl+Enter to save / Esc to cancel), delete confirmation
- Fix api.js setting Content-Type automatically only for POST; extend to PATCH/PUT/DELETE
- Fix CORS Allow-Methods missing PATCH/DELETE
- Add CSS styles for edit/delete buttons and the inline editing area

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
- render_html_to_image: add a viewport_width parameter (default 1280; other callers unaffected)
- Profile rendering uses a narrow 480px viewport, producing a tall image suited to phone viewing
- Remove max-width/margin:auto centering; use width:100% to fill the viewport
- Raise font sizes from 12px/14px to 14px/15px, line-height 1.8
- Reduce padding to save space

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Add 27 test files with 619 test cases, covering these previously untested modules:

Pure utility functions:
- utils/xml, cors, time_utils, message_targets, group_metrics
- utils/request_params, member_utils, common, tool_calls
- utils/message_utils, fake_at, cache

AI/Skills:
- ai/parsing, ai/tokens, ai/queue_budget
- skills/http_config, http_client, registry (SkillStats)
- context (RequestContext + helpers)

Storage/data:
- memory (MemoryStorage CRUD + dedup + caps)
- faq (FAQ dataclass + FAQStorage CRUD)
- rate_limit (RateLimiter per-role throttling)
- end_summary_storage, token_usage_storage, scheduled_task_storage
- config/models (format_netloc, resolve_bind_hosts)
- utils/qq_emoji

All 1423 tests pass ruff + mypy strict (zero errors in the new files)

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
- feat(profile): superadmins can specify a target with /p <QQ id> or /p g <group id>
- refactor(utils): unify safe_int/safe_float into utils/coerce.py, replacing 8 duplicated copies
- feat(webui): global JS error handling via window.onerror + toast
- feat(webui): AbortController request cancellation; abort stale requests on tab switch
- test: add 6 tests for the superadmin profile-target feature
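The unified coercion helpers can be sketched as below. This is a guess at the spirit of utils/coerce.py — tolerant conversion with an explicit default on junk input — not its exact signatures.

```python
def safe_int(value, default=None):
    """Coerce `value` to int, returning `default` on failure.
    Illustrative sketch of a utils/coerce.py-style helper."""
    if isinstance(value, bool):
        return int(value)  # bool is an int subclass; handle it explicitly
    try:
        return int(str(value).strip())
    except (ValueError, TypeError):
        return default

def safe_float(value, default=None):
    """Coerce `value` to float, returning `default` on failure."""
    try:
        return float(str(value).strip())
    except (ValueError, TypeError):
        return default
```

Centralizing this in one module means the 8 previously duplicated call sites all share the same edge-case behavior (whitespace, bools, None).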

Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Add a .skeleton shimmer animation CSS class (components.css)
- Logs page: add datetime-local time-range filtering (log-view.js, state.js)
- Overview page: add a Canvas real-time CPU/memory trend chart (bot.js, 120-point history)
- Config page: add a "View TOML" raw-text toggle (main.js, config toggle)
- Add i18n entries (overview.chart, config.view_toml/form)

Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- CSS: overlay, palette card, input, list items, keyboard hint styles
- HTML: modal overlay with input and list container
- i18n: zh/en strings for all palette commands
- JS: command list (tab nav, refresh, logout), open/close,
  keyboard navigation (arrows + enter), fuzzy filtering,
  Ctrl/Cmd+K toggle, Escape to close, click-outside dismiss

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Add _backup_config() helper that creates timestamped backups in
  data/config_backups/ with a 50-backup cap
- Auto-backup before every config save (POST /api/config) and patch
  (POST /api/patch)
- GET /api/config/history — list all backups (newest first)
- POST /api/config/history/restore — restore a backup by name with
  TOML validation and auto-backup of current config
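The timestamped-backup-with-cap behavior described above can be sketched as follows. Function and file names are illustrative; the actual _backup_config helper and its naming scheme may differ.

```python
import shutil
import time
from pathlib import Path

def backup_config(config_path: Path, backup_dir: Path, cap: int = 50) -> Path:
    """Copy the config into backup_dir with a timestamped name, evicting
    the oldest backups beyond `cap`. Illustrative sketch only."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    # time_ns() keeps names unique for multiple backups in the same second.
    dest = backup_dir / f"config-{stamp}-{time.time_ns()}.toml"
    shutil.copy2(config_path, dest)
    # Names sort chronologically, so the slice drops the oldest entries.
    backups = sorted(backup_dir.glob("config-*.toml"))
    for old in (backups[:-cap] if len(backups) > cap else []):
        old.unlink()
    return dest
```

Pruning on every save keeps the backup directory bounded at `cap` files without needing a separate cleanup job.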

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
- Add stack-based trapFocus/releaseFocus to ui.js
- Wire focus trap into openCmdPalette/closeCmdPalette
- Tab/Shift+Tab cycles within modal boundaries

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Extract 6 sub-modules from the 3365-line monolith:
- coercers.py: type coercion/normalization helpers (~150 lines)
- resolvers.py: config value resolution (~106 lines)
- admin.py: local admin management (~44 lines)
- webui_settings.py: WebUI settings class (~61 lines)
- model_parsers.py: all model config parsers (~1250 lines)
- domain_parsers.py: domain config parsers + _update_dataclass (~284 lines)

loader.py retains Config class, TOML loading, and all re-exports for
backward compatibility. All 1429 tests pass, mypy strict clean.

Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- fix(prompt): split concatenated bullet points in judge_meme_image.txt
- fix(prompts): use configurable repeat_threshold instead of hardcoded 3
- fix(memes): guard safe_int(0) → None for group_id sentinel value
- fix(fake_at): normalize cached nicknames with NFKC (not just casefold)
- fix(handlers): skip repeat counting for empty text messages

Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
- Add 'pic' key to _MEDIA_LABELS for correct tag-based fallback label
- Harden config backup path validation with backslash check and
  resolve()-based containment verification

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
MemeConfig had these fields with defaults but _parse_memes_config
never read them from the [memes] section, silently ignoring user
configuration.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
_prepare_gif_multi_frames creates per-frame PNG files ({uid}_f{i}.png) for
LLM analysis but they were never cleaned up. Add _cleanup_gif_frame_files
helper and call it:
- In _cleanup_meme_artifacts when uid is provided
- In delete_meme
- After judge/describe AI calls complete (frames no longer needed)

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
69gg and others added 2 commits April 19, 2026 15:38
_render_image_tag and _render_file_tag error paths returned early but
attachments.append(record.prompt_ref()) still executed unconditionally.
Change both helpers to return bool; only append on success.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Reinforce that NagaAgent technical questions must call the agent before
replying. Add explicit when_to_call scenarios and emphasize not relying
on memory for frequently-updated project details.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
69gg and others added 3 commits April 19, 2026 15:57
…AI mimicry

The AI model (kimi-k2.5) read the system prompt about [系统关键词自动回复]
and mimicked the format via send_message tool, fabricating a reply that
didn't exist in the codebase. Reworded the prompt to explicitly state
these messages are generated by a separate code path (handlers.py),
use fixed responses, and never go through the AI's tool calls.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
…peat

Move the early 'return' inside the else branch (actual repeat sent) so
that when cooldown suppresses a repeat, the message continues through
downstream handlers (bilibili/arxiv/command/AI auto-reply) instead of
being silently dropped.

Update test_repeat_cooldown_suppresses_same_text to verify that
ai_coordinator.handle_auto_reply is still called after suppression.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
1. Meme reanalyze now uses GIF multi-frame analysis (Flag #1):
   _process_reanalyze_job checks record.is_animated + gif_analysis_mode
   and calls _prepare_gif_multi_frames, matching the ingest code path.
   Frame files are cleaned up in all exit paths.

2. Repeat cooldown dict no longer grows unboundedly (Flag #17):
   _record_repeat_cooldown now prunes expired entries on each insert,
   preventing slow memory leak from accumulated unique texts per group.

3. GIF frame files cleaned up on retryable LLM errors (Flag #25):
   Both judge and describe stages in the ingest path now clean up
   multi-frame temp files before re-raising retryable exceptions.
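The prune-on-insert fix for the repeat-cooldown dict (Flag #17 above) can be sketched as below. Class and method names are illustrative; the project's _record_repeat_cooldown works on its own structures.

```python
import time

class RepeatCooldown:
    """Sketch of bounded cooldown bookkeeping: each (group, text) pair
    records an expiry, and expired entries are evicted on every insert,
    so unique texts cannot accumulate into a slow memory leak."""

    def __init__(self, cooldown_seconds: float) -> None:
        self.cooldown_seconds = cooldown_seconds
        self._expiry: dict = {}

    def record(self, group_id: int, text: str, now=None) -> None:
        now = time.monotonic() if now is None else now
        if self.cooldown_seconds <= 0:
            return  # cooldown disabled: record nothing at all
        # Prune expired entries on each insert (the Flag #17 fix).
        self._expiry = {k: v for k, v in self._expiry.items() if v > now}
        self._expiry[(group_id, text)] = now + self.cooldown_seconds

    def is_suppressed(self, group_id: int, text: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        return self._expiry.get((group_id, text), 0.0) > now
```

Pruning at insert time costs O(entries) per repeat event, which is cheap at chat scale, and guarantees the dict size is bounded by the number of texts still inside their cooldown window.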

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
69gg and others added 2 commits April 19, 2026 16:37
…placement

The aggressive content.replace('\\n', '\n') destroyed any LaTeX command
starting with \n (\nu, \nabla, \neq, \neg, etc). Use a regex with a negative
lookahead so only a literal \n not followed by a letter is converted to a
real newline.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@openai.com>
Previously vision model max_tokens was hardcoded (256/512/8192) at call
sites. With thinking-enabled models like kimi-k2.5, the small budgets
were entirely consumed by the thinking chain, leaving no room for
tool-call output.

- Add max_tokens field to VisionModelConfig (default 8192)
- Parse from config.toml [models.vision] and VISION_MODEL_MAX_TOKENS env
- Replace all hardcoded max_tokens in multimodal.py with config value
- Update config.toml.example with documentation

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
69gg and others added 2 commits April 19, 2026 17:21
The historian now sees the same message count and XML format as the main
AI, improving disambiguation quality:

- Extract shared format_message_xml() / format_messages_xml() into
  utils/xml.py; deduplicate prompts.py and fetch_messages handler
- Historian recent messages use context_recent_messages_limit (default 20)
  instead of historian_recent_messages_inject_k (was 12)
- Messages formatted as XML (matching main AI) instead of plain text
  bullet list, including attachments and full metadata
- Update historian_rewrite.md prompt to note XML format
- Update tests for new format and import paths

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Cap top_k to 500 in frontend (appendPositiveIntParam) and backend
  (cognitive service + vector_store _safe_positive_int) to prevent
  ChromaDB OverflowError when large integers are passed
- Add max=500 to all top_k HTML inputs
- Cap fetch_k to 10000 in vector_store._query() as safety net
- Add debounced auto-search (350ms) for meme text inputs with
  pending-refresh pattern to avoid stale results
- Enter key flushes debounce timer for instant search
- Move the 运行环境 (runtime environment) card before the 资源趋势 (resource trend) chart in the dashboard layout
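The top_k/fetch_k capping above can be sketched as one small clamp helper; the name and default mirror the _safe_positive_int idea described in the commit, but the exact project signature is an assumption.

```python
def safe_positive_int(value, default: int, cap: int = 500) -> int:
    """Coerce to a positive int, falling back to `default` on junk input
    and clamping to `cap` so oversized values (which can trigger a
    ChromaDB OverflowError) never reach the vector store."""
    try:
        n = int(value)
    except (TypeError, ValueError):
        return default
    if n <= 0:
        return default
    return min(n, cap)
```

Applying the same clamp in the frontend, the cognitive service, and the vector store gives defense in depth: no single layer has to be the last line against an attacker-supplied huge integer.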

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
69gg and others added 4 commits April 19, 2026 17:56
- Add historian_model to hot_reload tracking sets (_QUEUE_INTERVAL_KEYS
  and _MODEL_NAME_KEYS) so config changes take effect without restart
- Fix memory leak in _record_repeat_cooldown: skip recording entirely
  when cooldown_minutes=0 instead of accumulating never-evicted entries
- Add test assertion verifying no cooldown entries when disabled

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Create docs/webui-guide.md covering all 8 tabs, config, shortcuts, FAQ
- Add link in README.md documentation navigation section
- Add reference in docs/deployment.md startup section
- Add reference in docs/management-api.md recommended entry section

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
…stion

Add _MAX_COMBINATORIAL_ARG=1000 limit for factorial/perm/comb to prevent
adversarial inputs like factorial(99999) from consuming excessive CPU.
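The argument guard above can be sketched as a pre-check before the expensive call; guarded_factorial is an illustrative wrapper name, while _MAX_COMBINATORIAL_ARG=1000 is the limit stated in the commit.

```python
import math

_MAX_COMBINATORIAL_ARG = 1000  # limit from the commit message

def guarded_factorial(n: int) -> int:
    # Reject oversized arguments BEFORE calling math.factorial, so an
    # adversarial factorial(99999) never gets a chance to burn CPU.
    if not 0 <= n <= _MAX_COMBINATORIAL_ARG:
        raise ValueError(
            f"factorial argument out of range (max {_MAX_COMBINATORIAL_ARG})"
        )
    return math.factorial(n)
```

The same bound applies naturally to perm and comb, since their cost also grows with the argument magnitude.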

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
devin-ai-integration (bot) left a review comment: Devin Review found 1 new potential issue (comment thread on src/Undefined/handlers.py) and reported 41 additional findings.
@69gg 69gg merged commit 5a8b20a into main Apr 19, 2026
3 checks passed
