
fix: prevent proactive replies from polluting long-term conversation history#7624

Open
AgIzT wants to merge 2 commits into AstrBotDevs:master from AgIzT:fix/active-reply-contexts

Conversation


@AgIzT AgIzT commented Apr 17, 2026

This PR fixes the issue reported in #7622.

When both of the following are enabled:

  • provider_ltm_settings.group_icl_enable = true
  • provider_ltm_settings.active_reply.enable = true

one successful proactive reply can cause the temporary chatroom-style exchange to be written back into conv.history through the generic persistence flow. This pollutes the real user-to-bot conversation history stored in the database. After that, even normal passive @bot messages may appear to "forget" previously remembered facts because they load corrupted history from the session record.

The core problem is not only that req.contexts is cleared in memory. The more important issue is that the proactive-reply chatroom prompt and model response are incorrectly persisted into long-term conversation history.

Modifications

This fix only changes two files under astrbot/builtin_stars/astrbot/. It does not modify the core persistence pipeline, does not introduce new configuration fields, and does not add any new dependencies.

  1. astrbot/builtin_stars/astrbot/main.py
  • The proactive-reply path no longer passes the current conv into request_llm(...)
  • Proactive replies now use conversation=None
  • This prevents proactive replies from going through the generic history-saving path and overwriting conv.history with chatroom content
  • A _ltm_active_reply_trigger marker is attached to the ProviderRequest instance to indicate that the current request was actually triggered by a proactive reply
  2. astrbot/builtin_stars/astrbot/long_term_memory.py
  • The on_req_llm(...) branch condition is changed from "only check enable_active_reply" to "the feature is enabled and the current request is truly triggered by a proactive reply"
  • Only proactive-reply requests enter the chatroom rewrite branch:
    • recent group-chat context is appended into the prompt
    • req.contexts is cleared
  • Passive @bot requests always preserve req.contexts and continue to use the real long-term conversation history
  3. Behavioral result of the fix
  • Proactive replies still keep their original chatroom-style behavior

  • Proactive replies no longer pollute the database-backed long-term conversation history

  • Passive @bot requests can still recall previously established session facts after proactive replies occur

  • This is NOT a breaking change.
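The main.py side of the change can be sketched as follows. This is a simplified stand-in based on the description above, not AstrBot's actual implementation: the `ProviderRequest` class and `build_proactive_request` helper here are hypothetical, and only the two behaviors the PR describes are modeled (passing `conversation=None` and attaching the `_ltm_active_reply_trigger` marker):

```python
# Hypothetical sketch of the proactive-reply request path after the fix.
# ProviderRequest is a minimal stand-in; the real class lives in AstrBot core.

class ProviderRequest:
    def __init__(self, prompt, conversation=None):
        self.prompt = prompt
        self.conversation = conversation
        self.contexts = []

def build_proactive_request(prompt):
    # Before the fix, the current `conv` was passed here, which routed the
    # chatroom exchange through the generic history-saving path and
    # overwrote conv.history. Passing None skips that path entirely.
    req = ProviderRequest(prompt, conversation=None)
    # Dynamic marker so long_term_memory.py can distinguish this request
    # from a passive @bot request.
    req._ltm_active_reply_trigger = True
    return req

req = build_proactive_request("recent group-chat context...")
print(req.conversation is None, getattr(req, "_ltm_active_reply_trigger", False))
# → True True
```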
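On the long_term_memory.py side, the tightened branch condition reduces to something like the helper below. The function name and `Req` class are illustrative only; the real check lives inside the `on_req_llm(...)` hook:

```python
def should_rewrite_as_chatroom(req, enable_active_reply):
    # Old condition: checked only enable_active_reply, so passive @bot
    # requests also entered the chatroom rewrite branch and had
    # req.contexts cleared.
    # New condition: the feature must be enabled AND the request must carry
    # the proactive-reply marker attached in main.py.
    return enable_active_reply and getattr(req, "_ltm_active_reply_trigger", False)

class Req:
    def __init__(self):
        self.contexts = ["earlier user-to-bot turns"]

passive = Req()                          # passive @bot request: no marker
proactive = Req()
proactive._ltm_active_reply_trigger = True

print(should_rewrite_as_chatroom(passive, True))    # → False (contexts kept)
print(should_rewrite_as_chatroom(proactive, True))  # → True (chatroom rewrite)
```

Using `getattr` with a default means requests that never pass through the proactive-reply path fall through to the normal history-backed branch, even with the feature enabled.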

Notes

Note: because proactive replies no longer bind to the current conversation, conversation-level persona / skills injection is also skipped for those proactive replies. This is an intentional tradeoff to prevent chatroom exchanges from polluting long-term conversation history.

Screenshots or Test Results

Validation environment:

  • AstrBot version: v4.23.1
  • Deployment method: Docker
  • Provider used: Gemini
  • Messaging platform used: NapCat QQ
  • OS: Linux

Verification steps:

  1. Enable:
    • provider_ltm_settings.group_icl_enable = true
    • provider_ltm_settings.active_reply.enable = true
    • group_message_max_cnt = 20
  2. Build a normal passive @bot conversation and let the bot remember a custom nickname or fact
  3. Ask again and confirm that the bot still remembers it
  4. Wait until one proactive reply is triggered in the group
  5. Ask the same question again through a normal passive @bot message
  6. Inspect the Web UI conversation history

Result before the fix:

  • once a proactive reply is triggered, the long-term conversation history can be overwritten by chatroom content
  • later passive @bot requests can no longer correctly recall previously remembered facts
  • the Web UI shows recent group-chat context and the bot's chatroom-style reply instead of the original user-to-bot conversation history

Result after the fix:

  • proactive replies still trigger normally and keep the same chatroom-style behavior
  • proactive-reply-related group-chat context no longer enters the persisted conversation history
  • proactive replies no longer overwrite existing conv.history
  • later passive @bot requests can still correctly recall previously established long-term session facts
  • even after multiple proactive replies, the Web UI conversation history remains the real user-to-bot conversation history

Notes:

  • This issue is a logic and persistence-corruption bug, so it usually does not produce an exception or traceback
  • Because of that, validation mainly relies on stable reproduction, Web UI history changes, and before/after behavior comparison

Checklist

  • 😊 If there are new features added in the PR, I have discussed it with the authors through issues/emails, etc.
    / This PR does not add a new feature; the related problem has already been documented in Issue [Bug] "After enabling proactive replies and group-chat context awareness, the session's long-term memory is overwritten by group-chat context" #7622.

  • 👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.
    / This change has been validated, and the verification steps and test results are provided above.

  • 🤓 I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.
    / No new dependencies were introduced by this change.

  • 😮 My changes do not introduce malicious code.
    / This change does not introduce malicious code.

Summary by Sourcery

Prevent proactive group replies from mutating long-term conversation history while preserving their chatroom-style behavior.

Bug Fixes:

  • Ensure proactive replies no longer persist chatroom-style exchanges into database-backed long-term conversation history.
  • Restrict long-term memory chatroom rewrite logic to requests explicitly triggered by proactive replies so passive @bot messages retain correct context.

@dosubot dosubot bot added size:S This PR changes 10-29 lines, ignoring generated files. area:provider The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner. labels Apr 17, 2026
Contributor

@sourcery-ai sourcery-ai bot left a comment

Hey - I've reviewed your changes and they look great!



Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request updates the proactive reply mechanism to avoid chat history pollution by passing a null conversation and tagging requests with a dynamic attribute. Reviewers noted that disabling the conversation entirely might strip the bot of its persona and skills, suggesting a non-persistent conversation as a better alternative. There are also concerns regarding the use of setattr for internal state, which could fail if the request object has strict validation or is copied during processing.

Comment thread astrbot/builtin_stars/astrbot/main.py
@AgIzT AgIzT changed the title feat(ltm): prevent proactive replies from polluting long-term conversation history fix(ltm): prevent proactive replies from polluting long-term conversation history Apr 17, 2026
@AgIzT AgIzT changed the title fix(ltm): prevent proactive replies from polluting long-term conversation history fix: prevent proactive replies from polluting long-term conversation history Apr 17, 2026
@Soulter Soulter force-pushed the master branch 2 times, most recently from faf411f to 0068960 Compare April 19, 2026 09:50