Bug description
When using Spring AI with the Bedrock Converse API and models such as `openai.gpt-oss-20b-1:0` or `openai.gpt-oss-120b-1:0`, `DefaultChatClient` does not correctly extract the assistant response.
The model returns multiple `Generation` objects in the `ChatResponse`:
- The first `Generation` has `textContent = null` and `finishReason = end_turn`.
- The second `Generation` contains the actual assistant message, with `finishReason = end_turn` and `textContent = "Hello! How can I help you today?"`.
However, `DefaultCallResponseSpec.content()` and `DefaultCallResponseSpec.entity()` read `textContent` only from the first generation, so `null` is returned instead of the real model output.
Environment
Bedrock Converse API + `openai.gpt-oss-20b-1:0` or `openai.gpt-oss-120b-1:0`
Steps to reproduce
```java
var content = chatClientRegistry
        .get(getType())
        .prompt("Hello")
        .call()
        .content();
System.out.println(content); // returns null
```
Inspecting `chatResponse()` shows two generations:
- `Generation[0]`: `textContent = null`, `finishReason = end_turn`
- `Generation[1]`: `textContent = "Hello! How can I help you today?"`, `finishReason = end_turn`
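Until this is fixed, one possible workaround is to call `chatResponse()` instead of `content()` and scan all generations for the first non-null, non-blank text. The sketch below models only that selection logic with a stand-in `Generation` record (the real type is Spring AI's `org.springframework.ai.chat.model.Generation`); the helper name `firstText` is hypothetical.

```java
import java.util.List;
import java.util.Objects;

// Stand-in for Spring AI's Generation, used only to illustrate the
// extraction logic; it is not the real Spring AI class.
record Generation(String textContent) {}

public class GenerationWorkaround {

    // Hypothetical helper: return the first non-null, non-blank text among
    // the generations, which is arguably what content() should do here.
    static String firstText(List<Generation> generations) {
        return generations.stream()
                .map(Generation::textContent)
                .filter(Objects::nonNull)
                .filter(text -> !text.isBlank())
                .findFirst()
                .orElse(null);
    }

    public static void main(String[] args) {
        // Mirrors the two-generation response described above: the first
        // generation carries no text, the second carries the answer.
        List<Generation> generations = List.of(
                new Generation(null) {}.textContent() == null
                        ? new Generation(null)
                        : new Generation(null),
                new Generation("Hello! How can I help you today?"));
        System.out.println(firstText(generations));
        // prints "Hello! How can I help you today?"
    }
}
```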
Expected behavior
When using the Bedrock Converse API with `openai.gpt-oss-20b-1:0`, the `content()` method should return the actual model output instead of null.