fix: handle tool_calls correctly in streaming mode for OpenAI provider#6354

Closed
LovieCode wants to merge 3 commits into AstrBotDevs:master from LovieCode:fix/streaming-tool-call

Conversation


@LovieCode LovieCode commented Mar 15, 2026

Problem:
When using streaming mode with OpenAI-compatible providers, tool calls are not executed. The AI generates a response but no tools are called, and no final reply is sent to the user.

Root Cause:
In streaming mode, ChatCompletionStreamState.get_final_completion() returns ParsedFunctionToolCall objects where type is None, instead of ChatCompletionMessageFunctionToolCall where type is "function". The original code checked tool_call.type == "function", which always failed for parsed tool calls, causing tool matching to fail.

Fix:
Use getattr to safely access function.name and arguments instead of relying on the type field. This makes the code work with both ParsedFunctionToolCall (type=None) and ChatCompletionMessageFunctionToolCall (type="function").


Modifications

  • Modified astrbot/core/provider/sources/openai_source.py

    • Changed tool call matching logic to use getattr for type-safe access to function.name and arguments
    • Added null checks for tool_call.id to prevent KeyError in extra_content handling
  • This is NOT a breaking change.
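A minimal sketch of the matching approach described above (the function and variable names here are illustrative, not the exact code in openai_source.py):

```python
def match_tool_calls(tool_calls, registered_tools):
    """Match streamed tool calls against registered tools without relying on
    tool_call.type, which may be None on ParsedFunctionToolCall objects
    returned by ChatCompletionStreamState.get_final_completion()."""
    matched = []
    for tool_call in tool_calls:
        # getattr instead of `tool_call.type == "function"`: works for both
        # ParsedFunctionToolCall (type=None) and
        # ChatCompletionMessageFunctionToolCall (type="function").
        tool_func = getattr(tool_call, "function", None)
        if tool_func is None:
            continue
        for tool in registered_tools:
            if tool.name == getattr(tool_func, "name", None):
                # Guard against a missing id to avoid a KeyError later
                # when building extra_content.
                call_id = getattr(tool_call, "id", None)
                matched.append((call_id, tool, getattr(tool_func, "arguments", None)))
    return matched
```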


Screenshots or Test Results

Test scenario: Ask AI to send an image in streaming mode

Before fix:

  • A "Saving chunk state error" warning appears
  • AI generates text but tool calls are not executed
  • No final response sent to user

After fix:

[INFO] Agent 使用工具: ['astrbot_execute_shell']
[INFO] 使用工具:astrbot_execute_shell,参数:{'command': 'dir ...'}
[INFO] Tool `astrbot_execute_shell` Result: ...
[INFO] Agent 使用工具: ['send_message_to_user']
[INFO] 使用工具:send_message_to_user,参数:{'messages': [{'type': 'image', ...}]}
[INFO] Tool `send_message_to_user` Result: Message sent to session ...

Tool calls are now executed correctly in streaming mode.


Checklist

  • 😊 If there are new features added in the PR, I have discussed them with the authors through issues/emails, etc.
  • 👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.
  • 🤓 I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.
  • 😮 My changes do not introduce malicious code.

Summary by Sourcery

Fix OpenAI streaming provider tool-call parsing so tools are correctly invoked and extra content is handled safely.

Bug Fixes:

  • Ensure tool calls returned from streaming completions are matched and executed even when the tool call type field is missing or None.
  • Prevent KeyError when handling extra_content for tool calls that may not have an id.

@auto-assign auto-assign bot requested review from Raven95676 and Soulter March 15, 2026 12:10
@dosubot dosubot bot added the size:S This PR changes 10-29 lines, ignoring generated files. label Mar 15, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical bug in the streaming mode for OpenAI-compatible providers, where AI agents failed to execute tool calls. The fix ensures that the system correctly parses and processes different types of tool call objects, allowing the AI to utilize its tools effectively and deliver complete, intended responses to users. This significantly improves the reliability and functionality of AI interactions involving external tools in a streaming context.

Highlights

  • Tool Call Execution Fix: Resolved an issue where tool calls were not executed in streaming mode for OpenAI-compatible providers, leading to incomplete AI responses.
  • Root Cause Identified: The problem stemmed from ParsedFunctionToolCall objects having a type of None, which caused the original code's tool_call.type == "function" check to fail.
  • Robust Tool Call Parsing: Implemented getattr for safer access to function.name and arguments, making the code compatible with both ParsedFunctionToolCall (type=None) and ChatCompletionMessageFunctionToolCall (type="function").
  • Null Checks for Tool IDs: Added explicit null checks for tool_call.id to prevent KeyError when handling extra_content.
Changelog
  • astrbot/core/provider/sources/openai_source.py
    • Updated the tool call matching logic to use getattr for robust access to function names and arguments, accommodating ParsedFunctionToolCall objects with type=None.
    • Introduced null checks for tool_call.id to prevent errors during the handling of extra_content.
Activity
  • The author provided detailed 'Verification Steps' and 'Screenshots or Test Results' demonstrating the fix's effectiveness, showing successful tool execution in streaming mode where it previously failed.


@sourcery-ai sourcery-ai bot left a comment


Hey - I've left some high level feedback:

  • When matching tools, hasattr(tool_func, "name") is redundant given that tool_func comes from getattr(tool_call, "function", None) and is only used in equality checks; consider simplifying the condition to just if tool_func and tool.name == tool_func.name for clarity.
  • In the tool-call handling branch, if tool_func_args is None the current logic will append None to args_ls; if downstream code expects a dict, it may be safer to skip such calls or default to {} to keep the list contents consistent.
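The second point can be sketched as follows, assuming downstream code expects args_ls to contain dicts (collect_args and the surrounding names are hypothetical, not the actual code under review):

```python
import json

def collect_args(tool_calls):
    """Collect parsed argument dicts, defaulting to {} when the streamed
    arguments are missing, so args_ls stays a consistent list of dicts."""
    args_ls = []
    for tool_call in tool_calls:
        tool_func = getattr(tool_call, "function", None)
        # Simplified condition per the review note: the extra
        # hasattr(tool_func, "name") check is dropped.
        if tool_func and tool_func.name:
            raw = getattr(tool_func, "arguments", None)
            # Default to {} instead of appending None.
            args_ls.append(json.loads(raw) if raw else {})
    return args_ls
```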


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly addresses a bug where tool calls were not being executed in streaming mode for OpenAI-compatible providers. The fix involves using getattr for safer attribute access on tool call objects, which can have different structures, and adds checks for potentially missing id attributes to prevent errors. The changes are logical and effectively solve the issue. I have one minor suggestion to improve code readability by simplifying a conditional check.

Comment thread on astrbot/core/provider/sources/openai_source.py (outdated)
@dosubot dosubot bot added the area:provider The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner. label Mar 15, 2026
@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Mar 17, 2026

Soulter commented Mar 17, 2026

I took a look at the openai SDK code:

class ParsedFunctionToolCall(ChatCompletionMessageFunctionToolCall):
    function: ParsedFunction
    """The function that the model called."""

ParsedFunctionToolCall inherits from ChatCompletionMessageFunctionToolCall:

class ChatCompletionMessageFunctionToolCall(BaseModel):
    """A call to a function tool created by the model."""

    id: str
    """The ID of the tool call."""

    function: Function
    """The function that the model called."""

    type: Literal["function"]
    """The type of the tool. Currently, only `function` is supported."""

So normally the type field should be present. For reference, this is how parse_chat_completion constructs the parsed tool calls:

def parse_chat_completion(
    *,
    response_format: type[ResponseFormatT] | completion_create_params.ResponseFormat | Omit,
    input_tools: Iterable[ChatCompletionToolUnionParam] | Omit,
    chat_completion: ChatCompletion | ParsedChatCompletion[object],
) -> ParsedChatCompletion[ResponseFormatT]:
    if is_given(input_tools):
        input_tools = [t for t in input_tools]
    else:
        input_tools = []

    choices: list[ParsedChoice[ResponseFormatT]] = []
    for choice in chat_completion.choices:
        if choice.finish_reason == "length":
            raise LengthFinishReasonError(completion=chat_completion)

        if choice.finish_reason == "content_filter":
            raise ContentFilterFinishReasonError()

        message = choice.message

        tool_calls: list[ParsedFunctionToolCall] = []
        if message.tool_calls:
            for tool_call in message.tool_calls:
                if tool_call.type == "function":
                    tool_call_dict = tool_call.to_dict()
                    tool_calls.append(
                        construct_type_unchecked(
                            value={
                                **tool_call_dict,
                                "function": {
                                    **cast(Any, tool_call_dict["function"]),
                                    "parsed_arguments": parse_function_tool_arguments(
                                        input_tools=input_tools, function=tool_call.function
                                    ),
                                },
                            },
                            type_=ParsedFunctionToolCall,
                        )
                    )


Soulter commented Mar 27, 2026

This issue has been fixed; see #6829.

@Soulter Soulter closed this Mar 27, 2026

Labels

  • area:provider — The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner.
  • lgtm — This PR has been approved by a maintainer.
  • size:S — This PR changes 10-29 lines, ignoring generated files.
