Expected Behavior
The DefaultToolCallingManager should process tool calls in parallel rather than sequentially when multiple independent tool calls are present in an AssistantMessage. This would significantly improve performance for scenarios requiring multiple concurrent AI tool interactions.
Current Behavior
The DefaultToolCallingManager processes tool calls sequentially using a simple for-loop:
for (AssistantMessage.ToolCall toolCall : assistantMessage.getToolCalls()) {
// Sequential processing
}
This creates performance bottlenecks when dealing with multiple independent tool calls that could be executed concurrently.
Context
Proposed Solution:
- Add parallel execution capability to DefaultToolCallingManager (a sketch of one possible approach follows below)
- Ensure proper error handling and response ordering
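As a rough illustration, here is a minimal sketch of what parallel execution could look like. `ToolCall`, `ToolResponse`, and `executeTool(...)` are simplified stand-ins rather than the actual Spring AI types, and the thread-pool sizing is arbitrary; this is not a proposed implementation for DefaultToolCallingManager itself.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch only: ToolCall, ToolResponse and executeTool are hypothetical stand-ins.
public class ParallelToolCallSketch {

    record ToolCall(String id, String name, String arguments) {}
    record ToolResponse(String id, String name, String responseData) {}

    private final ExecutorService executor = Executors.newFixedThreadPool(8);

    List<ToolResponse> executeParallel(List<ToolCall> toolCalls) {
        // Submit every independent tool call concurrently.
        List<CompletableFuture<ToolResponse>> futures = toolCalls.stream()
            .map(call -> CompletableFuture.supplyAsync(() -> executeTool(call), executor)
                // Turn per-call failures into an error response instead of failing the whole batch.
                .exceptionally(ex -> new ToolResponse(call.id(), call.name(),
                    "Error: " + ex.getMessage())))
            .toList();

        // Collect results in the original tool-call order, even though execution order varies.
        return futures.stream().map(CompletableFuture::join).toList();
    }

    private ToolResponse executeTool(ToolCall call) {
        // Placeholder for the real tool invocation.
        return new ToolResponse(call.id(), call.name(), "result for " + call.name());
    }
}
```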
Comment From: 192902649
While parallel execution improves efficiency, doesn't the order of tool execution still need to be taken into account?
Comment From: rafaelrddc
@192902649 While parallel execution may change the order in which responses come back, we shouldn't need to enforce response order in theory, because the models include an ID that ties each response back to its specific function call.
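To illustrate that point, responses can be matched back to their originating tool calls by ID regardless of completion order. This reuses the same hypothetical `ToolCall`/`ToolResponse` stand-ins from the earlier sketch and is not the actual Spring AI API.

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Sketch only: match responses to tool calls by ID so completion order does not matter.
public class ResponseMatchingSketch {

    record ToolCall(String id, String name, String arguments) {}
    record ToolResponse(String id, String name, String responseData) {}

    static List<ToolResponse> orderById(List<ToolCall> calls, List<ToolResponse> responses) {
        // Index responses by their tool-call ID.
        Map<String, ToolResponse> byId = responses.stream()
            .collect(Collectors.toMap(ToolResponse::id, Function.identity()));
        // Re-emit responses in the order of the original tool calls.
        return calls.stream().map(call -> byId.get(call.id())).toList();
    }
}
```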