Description:
When calling the OpenAI API via ChatClient in Spring AI using model: gpt-5, the request fails with a 400 Bad Request response. The same controller method works correctly with gpt-4.1 and Anthropic models (Claude).
Steps to reproduce:
- Use ChatClient with a BeanOutputConverter in a Spring WebFlux controller.
- Set model to "gpt-5".
- Make a GET request to the endpoint.
Expected behavior: The request should succeed and return the structured JSON output from the GPT-5 model.
Actual behavior: Spring AI logs show:
org.springframework.web.reactive.function.client.WebClientResponseException$BadRequest: 400 Bad Request from POST https://api.openai.com/v1/chat/completions
2025-08-09T11:21:41.736+02:00 ERROR reactor-http-epoll-4 class=org.springframework.ai.chat.model.MessageAggregator - Aggregation Error
org.springframework.web.reactive.function.client.WebClientResponseException$BadRequest: 400 Bad Request from POST https://api.openai.com/v1/chat/completions
at org.springframework.web.reactive.function.client.WebClientResponseException.create(WebClientResponseException.java:321)
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException: Error has been observed at the following site(s):
*__checkpoint ⇢ 400 BAD_REQUEST from POST https://api.openai.com/v1/chat/completions [DefaultWebClient]
The 400 only occurs with gpt-5.
gpt-4.1, gpt-4.1-mini and Anthropic models work as expected.
Notes:
- Spring AI version: 1.0.1
- OpenAI Java/Spring AI configuration is standard; no special headers are set.
- Possibly related to API payload format changes or missing headers required for gpt-5.
- Might require updating Spring AI's request serialization to support GPT-5's expected input schema.
Environment:
- Java version: 21
- Spring Boot version: 3.4.3
- Spring AI version: spring-ai-bom:1.0.1
Minimal reproducible example:
@GetMapping(value = "/llm/actors/openai", produces = MediaType.APPLICATION_JSON_VALUE)
public Mono<ActorsFilms> llmActorsOpenAI() {
    String actorName = "Klaus Kinski";
    BeanOutputConverter<ActorsFilms> converter = new BeanOutputConverter<>(ActorsFilms.class);
    return ChatClient.create(this.chatModelOpenAI)
            .prompt()
            .user(u -> u.text("Generate the filmography of 5 movies for {actor}. {format}")
                    .param("actor", actorName)
                    .param("format", converter.getFormat())) // JSON format instructions produced by the converter
            .stream()                               // streaming is what triggers the 400
            .content()
            .collectList()
            .map(chunks -> String.join("", chunks)) // reassemble the streamed chunks
            .mapNotNull(converter::convert);        // parse the full text into an ActorsFilms
}

public record ActorsFilms(String actor, List<String> films) {}
Comment From: saranshbansal
Looks like explicit output conversion for streamed responses is flaky.
Using a non-streaming response without an explicit BeanOutputConverter works fine; see the sketch below. (Not suggesting a solution here!)
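For reference, a minimal sketch of that non-streaming variant, assuming the same chatModelOpenAI field and ActorsFilms record as the original example (the path and method name are hypothetical; Schedulers is reactor.core.scheduler.Schedulers):

@GetMapping(value = "/llm/actors/openai/blocking", produces = MediaType.APPLICATION_JSON_VALUE)
public Mono<ActorsFilms> llmActorsOpenAIBlocking() {
    return Mono.fromCallable(() -> ChatClient.create(this.chatModelOpenAI)
                    .prompt()
                    .user("Generate the filmography of 5 movies for Klaus Kinski.")
                    .call()                      // blocking, non-streaming request
                    .entity(ActorsFilms.class))  // structured output without a manual converter
            .subscribeOn(Schedulers.boundedElastic()); // keep the blocking call off the event loop
}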
Have you tried with 1.1.0-SNAPSHOT?
Comment From: xxx24xxx
I have now tested it with version spring-ai-bom:1.1.0-SNAPSHOT: Still the same behavior. It doesn’t work with model gpt-5, but it does work with gpt-4.1.
Comment From: amagnolo
The issue occurs even without using a BeanOutputConverter.
Example:
ChatClient.ChatClientRequestSpec request = chat.prompt("hello")
        .options(ChatOptions.builder().temperature(1.0).build());
- request.call().content() → works as expected
- request.stream().content() → returns HTTP 400 when subscribe() is called
Models affected: gpt-5, gpt-5-mini, o4-mini
Models not affected: gpt-4.1, gpt-4o-mini
So it seems related to reasoning models. The effect is the same with spring ai 1.0.1 and 1.1.0-SNAPSHOT.
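A self-contained version of that repro, as a hedged sketch; chatModel is assumed to be an autowired OpenAI ChatModel configured for gpt-5:

ChatClient chat = ChatClient.create(chatModel);
ChatClient.ChatClientRequestSpec request = chat.prompt("hello")
        .options(ChatOptions.builder().temperature(1.0).build());

String ok = request.call().content();   // non-streaming path: succeeds
request.stream().content()
        .doOnError(System.err::println) // streaming path: the 400 surfaces here, on subscription
        .blockLast();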
Comment From: mpalourdio
I can confirm what @amagnolo says about streaming; it also affects gpt-5-chat and gpt-5-nano. I don't use any BeanOutputConverter on my side either.
Comment From: TheKuyPeD
I’m experiencing the same issue on version 1.0.1. Any updates or workaround?
Comment From: sfryslie
Something I noticed when testing locally: I think I see the issue and a workaround.
The default value for temperature is 0.8 in Spring AI OpenAI, and when I am using the ChatModel interface I get this error with a better explanation.
HTTP 400 - {
"error": {
"message": "Unsupported value: 'temperature' does not support 0.8 with this model. Only the default (1) value is supported.",
"type": "invalid_request_error",
"param": "temperature",
"code": "unsupported_value"
}
}
Configuring spring.ai.openai.chat.options.temperature=1 worked for me locally. Perhaps try that in your config, and use ChatModel directly to see if it gives better error messaging.
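The same override can also be applied per request; a hedged sketch assuming the OpenAiChatOptions builder from Spring AI 1.0.x:

// Equivalent to the application property
// spring.ai.openai.chat.options.temperature=1
OpenAiChatOptions options = OpenAiChatOptions.builder()
        .model("gpt-5")
        .temperature(1.0) // reasoning models only accept the default temperature of 1
        .build();

String answer = ChatClient.create(chatModelOpenAI)
        .prompt("hello")
        .options(options)
        .call()
        .content();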
Comment From: amagnolo
> The default value for temperature is 0.8 in Spring AI OpenAI, and when I am using the ChatModel interface I get this error with a better explanation.
That's a different error, doesn't solve this issue.
Comment From: sfryslie
> The default value for temperature is 0.8 in Spring AI OpenAI, and when I am using the ChatModel interface I get this error with a better explanation.
> That's a different error, doesn't solve this issue.
Ah, googling "gpt-5 broken Spring AI" with caffeine not fully kicked in Monday morning. Hopefully someone else finds value out of my comment separately 😅
Comment From: jnoahjohnson
Any update on this?
Comment From: mingren11
Having the same problem—how is everyone resolving it?
Comment From: mpalourdio
@mingren11 There's no workaround AFAIK for the moment. Shamelessly pinging @markpollack to see if this can get some love before the next milestone.
Comment From: xxxjjhhh
Same problem: stream() and call() don't work with gpt-5 and gpt-5-xxx models.
Comment From: Okhotnik-V
Same problem: stream() and call() not working with the gpt-5 model. Version: Spring AI 1.1.0-M1.
Comment From: amagnolo
This issue is still awaiting triage, which is unfortunate given its impact on a core workflow: streaming responses from LLMs is now a standard expectation for chat applications. The bug affects the current version of the most popular inference provider’s LLM, and the high number of comments here shows how many developers are blocked by this limitation.
Comment From: amagnolo
In case anyone is interested: Spring AI streaming works with gpt-5-chat-latest, since it is a non-reasoning model. Unfortunately, it doesn't support tool calling.
Comment From: XPlay1990
This is actually not an issue with Spring AI, but a policy of OpenAI.
If you call the completions endpoint of OpenAI directly, you will see that you have to verify your organization to use GPT-5 with streaming. With a verified organization, streaming works for me with gpt-5-mini.
{
"error": {
"message": "Your organization must be verified to stream this model. Please go to: https://platform.openai.com/settings/organization/general and click on Verify Organization. If you just verified, it can take up to 15 minutes for access to propagate.",
"type": "invalid_request_error",
"param": "stream",
"code": "unsupported_value"
}
}
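To see the full error body that Spring AI's bare 400 hides, you can hit the endpoint directly; a hedged sketch using Spring's WebClient, assuming OPENAI_API_KEY is set in the environment:

WebClient client = WebClient.create("https://api.openai.com");
String response = client.post()
        .uri("/v1/chat/completions")
        .header("Authorization", "Bearer " + System.getenv("OPENAI_API_KEY"))
        .contentType(MediaType.APPLICATION_JSON)
        .bodyValue("""
                {"model": "gpt-5", "stream": true,
                 "messages": [{"role": "user", "content": "hello"}]}
                """)
        .exchangeToMono(resp -> resp.bodyToMono(String.class)) // returns the JSON error body even on 4xx
        .block();
System.out.println(response);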
Comment From: mpalourdio
Wow, you seem absolutely right, what a d**k move from OpenAI. There's no way I give personal biometric data to use their API, wtf.
https://help.openai.com/en/articles/10910291-api-organization-verification
https://community.openai.com/t/why-can-non-verified-organisations-use-gpt-5-and-gpt-5-mini-but-not-stream-them/1338294
Comment From: xxxjjhhh
After verifying, I still get the error (help me.....)
Env: 1.0.1, 1.1.0-M1. API: gpt-5, gpt-5-mini. Method: stream(). Additional: tool calling.
Comment From: amagnolo
> After verifying, I still get the error (help me.....)
Wait 30 minutes, generate a new API key, and make sure the temperature is set to 1.0. This worked for me.
Very annoying and cumbersome move from OpenAI. I'm glad to know this isn’t Spring AI’s fault, though. Thank you very much @XPlay1990 .
Comment From: xxxjjhhh
@amagnolo @XPlay1990 thx bro!!!!
The fundamental problem was OpenAI. After verification, I realized I was missing the temperature setting.
appreciate it.