We’re currently building a tool-calling feature with Spring AI and have run into a limitation.

In the current implementation, OllamaChatModel.internalStream() exposes no way to tell whether a streamed ChatResponse was produced by a tool call. As a result, frontend developers cannot distinguish tool output from regular model output, which makes it difficult to render tool-specific UI behavior.

After reviewing the source code, we noticed that tool-related responses are neither marked nor separated in the streaming flow, and no hook or metadata is exposed to help with this.
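As a stopgap, we currently classify chunks on our side by inspecting the finish reason. A minimal sketch in plain Java (ChunkView is a hypothetical stand-in for ChatResponse, and treating "returnDirect" / "tool_calls" as the tool-related finish reasons is our assumption based on the payloads below, not documented behavior):

```java
// Client-side workaround: heuristically flag tool-generated chunks.
// ChunkView stands in for Spring AI's ChatResponse; the finishReason values
// ("returnDirect", OpenAI-style "tool_calls") are assumptions, not a spec.
public class ToolChunkClassifier {

    public record ChunkView(String finishReason, String text) {}

    /** Heuristically decides whether a streamed chunk came from a tool. */
    public static boolean isToolGenerated(ChunkView chunk) {
        String reason = chunk.finishReason();
        return "returnDirect".equalsIgnoreCase(reason)
                || "tool_calls".equalsIgnoreCase(reason);
    }

    public static void main(String[] args) {
        ChunkView toolChunk = new ChunkView("returnDirect", "[{\"productId\":\"1000\"}]");
        ChunkView textChunk = new ChunkView("stop", "Hello");
        System.out.println(isToolGenerated(toolChunk)); // true
        System.out.println(isToolGenerated(textChunk)); // false
    }
}
```

This works only because our tools happen to use returnDirect; a first-class flag on ChatResponse would make it unnecessary.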

🙏 Feature Request

Would it be possible to:

Add metadata or a flag in ChatResponse indicating whether it is a tool-generated message?

Or expose tool_calls (as in OpenAI's API spec) via ChatResponse when available?

Or allow developers to customize the parsing of the raw response, for more advanced routing?

Any of these would make it much easier to integrate tool responses cleanly into frontend workflows.
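For context, this is roughly the envelope our backend assembles by hand today for each SSE event so the frontend can branch on tool output; the names (TaggedResponse, toolNames) and the "returnDirect" check are ours, not Spring AI's:

```java
import java.util.List;

// Hypothetical per-event envelope our backend builds so the frontend can
// branch on tool output; field names are our own invention, not Spring AI API.
public record TaggedResponse(boolean toolGenerated, List<String> toolNames, String text) {

    /** Builds the envelope from the raw pieces we can scrape out of a chunk. */
    public static TaggedResponse of(String finishReason, List<String> toolNames, String text) {
        boolean fromTool = "returnDirect".equalsIgnoreCase(finishReason)
                || !toolNames.isEmpty();
        return new TaggedResponse(fromTool, List.copyOf(toolNames), text);
    }
}
```

If a flag or the tool_calls list were surfaced directly on ChatResponse, this per-application glue code would disappear.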

Thanks for the great work on Spring AI! Looking forward to your thoughts.



```
data:{"result":{"metadata":{"finishReason":"returnDirect","contentFilters":[],"empty":false},"output":{"messageType":"ASSISTANT","metadata":{"messageType":"ASSISTANT"},"toolCalls":[],"media":[],"text":"[{\"productAmount\":\"888.88\",\"productId\":\"1000\",\"productName\":\"productName1\"},{\"productAmount\":\"666.88\",\"productId\":\"1002\",\"productName\":\"productName2\"},{\"productAmount\":\"88.88\",\"productId\":\"1003\",\"productName\":\"productName3\"}]"}},"metadata":{"id":"","model":"qwen3:30bqwen3:30b","rateLimit":{"requestsReset":"PT0S","tokensReset":"PT0S","requestsLimit":0,"tokensLimit":0,"tokensRemaining":0,"requestsRemaining":0},"usage":{"promptTokens":441,"completionTokens":202,"totalTokens":643},"promptMetadata":[],"empty":false},"results":[{"metadata":{"finishReason":"returnDirect","contentFilters":[],"empty":false},"output":{"messageType":"ASSISTANT","metadata":{"messageType":"ASSISTANT"},"toolCalls":[],"media":[],"text":"[{\"productAmount\":\"888.88\",\"productId\":\"1000\",\"productName\":\"productName1\"},{\"productAmount\":\"666.88\",\"productId\":\"1002\",\"productName\":\"productName2\"},{\"productAmount\":\"88.88\",\"productId\":\"1003\",\"productName\":\"productName3\"}]"}}]}
```


Comment From: 192902649


Comment From: 192902649

The desired behavior is for Spring AI to mark tool-generated responses with an explicit field, e.g.:

```
data:{"result":{"metadata":{"finishReason":"returnDirect","contentFilters":[],"empty":false},"output":{"messageType":"TOOLS","metadata":{"messageType":"TOOLS"},"toolCalls":["getProducts"],"media":[],"text":"[{\"productAmount\":\"888.88\",\"productId\":\"1000\",\"productName\":\"productName1\"},{\"productAmount\":\"666.88\",\"productId\":\"1002\",\"productName\":\"productName2\"},{\"productAmount\":\"88.88\",\"productId\":\"1003\",\"productName\":\"productName3\"}]"}},"metadata":{"id":"","model":"qwen3:30bqwen3:30b","rateLimit":{"requestsReset":"PT0S","tokensReset":"PT0S","requestsLimit":0,"tokensLimit":0,"tokensRemaining":0,"requestsRemaining":0},"usage":{"promptTokens":441,"completionTokens":202,"totalTokens":643},"promptMetadata":[],"empty":false},"results":[{"metadata":{"finishReason":"returnDirect","contentFilters":[],"empty":false},"output":{"messageType":"TOOLS","metadata":{"messageType":"TOOLS"},"toolCalls":["getProducts"],"media":[],"text":"[{\"productAmount\":\"888.88\",\"productId\":\"1000\",\"productName\":\"productName1\"},{\"productAmount\":\"666.88\",\"productId\":\"1002\",\"productName\":\"productName2\"},{\"productAmount\":\"88.88\",\"productId\":\"1003\",\"productName\":\"productName3\"}]"}}]}
```

Comment From: wilocu

take