fix: handle tool_calls correctly in streaming mode for OpenAI provider #6354

LovieCode wants to merge 3 commits into AstrBotDevs:master from
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request addresses a critical bug in the streaming mode for OpenAI-compatible providers, where AI agents failed to execute tool calls. The fix ensures that the system correctly parses and processes different types of tool call objects, allowing the AI to utilize its tools effectively and deliver complete, intended responses to users. This significantly improves the reliability and functionality of AI interactions involving external tools in a streaming context.

Highlights
Hey - I've left some high level feedback:

- When matching tools, `hasattr(tool_func, "name")` is redundant given that `tool_func` comes from `getattr(tool_call, "function", None)` and is only used in equality checks; consider simplifying the condition to just `if tool_func and tool.name == tool_func.name` for clarity.
- In the tool-call handling branch, if `tool_func_args` is `None` the current logic will append `None` to `args_ls`; if downstream code expects a dict, it may be safer to skip such calls or default to `{}` to keep the list contents consistent.
Code Review
This pull request correctly addresses a bug where tool calls were not being executed in streaming mode for OpenAI-compatible providers. The fix involves using `getattr` for safer attribute access on tool call objects, which can have different structures, and adds checks for potentially missing `id` attributes to prevent errors. The changes are logical and effectively solve the issue. I have one minor suggestion to improve code readability by simplifying a conditional check.
I took a look at the openai sdk code:

```python
class ParsedFunctionToolCall(ChatCompletionMessageFunctionToolCall):
    function: ParsedFunction
    """The function that the model called."""
```

`ParsedFunctionToolCall` inherits from `ChatCompletionMessageFunctionToolCall`:

```python
class ChatCompletionMessageFunctionToolCall(BaseModel):
    """A call to a function tool created by the model."""

    id: str
    """The ID of the tool call."""

    function: Function
    """The function that the model called."""

    type: Literal["function"]
    """The type of the tool. Currently, only `function` is supported."""
```

so normally `type` should be present:

```python
def parse_chat_completion(
    *,
    response_format: type[ResponseFormatT] | completion_create_params.ResponseFormat | Omit,
    input_tools: Iterable[ChatCompletionToolUnionParam] | Omit,
    chat_completion: ChatCompletion | ParsedChatCompletion[object],
) -> ParsedChatCompletion[ResponseFormatT]:
    if is_given(input_tools):
        input_tools = [t for t in input_tools]
    else:
        input_tools = []

    choices: list[ParsedChoice[ResponseFormatT]] = []
    for choice in chat_completion.choices:
        if choice.finish_reason == "length":
            raise LengthFinishReasonError(completion=chat_completion)
        if choice.finish_reason == "content_filter":
            raise ContentFilterFinishReasonError()

        message = choice.message

        tool_calls: list[ParsedFunctionToolCall] = []
        if message.tool_calls:
            for tool_call in message.tool_calls:
                if tool_call.type == "function":
                    tool_call_dict = tool_call.to_dict()
                    tool_calls.append(
                        construct_type_unchecked(
                            value={
                                **tool_call_dict,
                                "function": {
                                    **cast(Any, tool_call_dict["function"]),
                                    "parsed_arguments": parse_function_tool_arguments(
                                        input_tools=input_tools, function=tool_call.function
                                    ),
                                },
                            },
                            type_=ParsedFunctionToolCall,
                        )
                    )
```
This issue has been fixed, see #6829.
Problem / 问题:
When using streaming mode with OpenAI-compatible providers, tool calls are not executed. The AI generates a response but no tools are called, and no final reply is sent to the user.
Root Cause / 根本原因:
In streaming mode, `ChatCompletionStreamState.get_final_completion()` returns `ParsedFunctionToolCall` objects where `type` is `None`, instead of `ChatCompletionMessageFunctionToolCall` where `type` is `"function"`. The original code checked `tool_call.type == "function"`, which always failed for parsed tool calls, causing tool matching to fail.

Fix / 修复:

Use `getattr` to safely access `function.name` and `arguments` instead of relying on the `type` field. This makes the code work with both `ParsedFunctionToolCall` (`type=None`) and `ChatCompletionMessageFunctionToolCall` (`type="function"`).

Modifications / 改动点

- Modified `astrbot/core/provider/sources/openai_source.py`
- Use `getattr` for type-safe access to `function.name` and `arguments`
- Check `tool_call.id` to prevent KeyError in extra_content handling

This is NOT a breaking change. / 这不是一个破坏性变更。
Screenshots or Test Results / 运行截图或测试结果
Test scenario: Ask AI to send an image in streaming mode
Before fix:

- `Saving chunk state error` warning appears

After fix:
Tool calls are now executed correctly in streaming mode.
Checklist / 检查清单
I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in `requirements.txt` and `pyproject.toml`.

Summary by Sourcery
Fix OpenAI streaming provider tool-call parsing so tools are correctly invoked and extra content is handled safely.
Bug Fixes: