Description:
When calling the OpenAI API via `ChatClient` in Spring AI with model `gpt-5`, the request fails with a 400 Bad Request response. The same controller method works correctly with `gpt-4.1` and Anthropic models (Claude).
Steps to reproduce:
- Use `ChatClient` with a `BeanOutputConverter` in a Spring WebFlux controller.
- Set `model` to `"gpt-5"`.
- Make a GET request to the endpoint.
Expected behavior: The request should succeed and return the structured JSON output from the GPT-5 model.
Actual behavior: Spring AI logs show:
```
2025-08-09T11:21:41.736+02:00 ERROR reactor-http-epoll-4 class=org.springframework.ai.chat.model.MessageAggregator - Aggregation Error
org.springframework.web.reactive.function.client.WebClientResponseException$BadRequest: 400 Bad Request from POST https://api.openai.com/v1/chat/completions
	at org.springframework.web.reactive.function.client.WebClientResponseException.create(WebClientResponseException.java:321)
	Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException: Error has been observed at the following site(s):
		*__checkpoint ⇢ 400 BAD_REQUEST from POST https://api.openai.com/v1/chat/completions [DefaultWebClient]
```
The 400 only occurs with `gpt-5`; `gpt-4.1`, `gpt-4.1-mini`, and Anthropic models work as expected.
Notes:
- Spring AI version: 1.0.1
- OpenAI Java/Spring AI configuration is standard; no special headers are set.
- Possibly related to API payload format changes or missing headers required for `gpt-5`.
- Might require updating Spring AI's request serialization to support GPT-5's expected input schema.
Environment:
- Java version: 21
- Spring Boot version: 3.4.3
- Spring AI version: spring-ai-bom:1.0.1
Minimal reproducible example:
```java
@GetMapping(value = "/llm/actors/openai", produces = MediaType.APPLICATION_JSON_VALUE)
public Mono<ActorsFilms> llmActorsOpenAI() {
    String actorName = "Klaus Kinski";
    BeanOutputConverter<ActorsFilms> converter = new BeanOutputConverter<>(ActorsFilms.class);
    return ChatClient.create(this.chatModelOpenAI)
            .prompt()
            .user(u -> u.text("Generate the filmography of 5 movies for {actor}. {format}")
                    .param("actor", actorName)
                    .param("format", converter.getFormat()))
            .stream()
            .content()
            .collectList()
            .map(chunks -> String.join("", chunks))
            .mapNotNull(converter::convert);
}

public record ActorsFilms(String actor, List<String> films) {}
```
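To help isolate whether the 400 comes from the request body itself, the failing streaming call can be replayed outside Spring AI with a hand-built payload. The sketch below is stdlib-only; the JSON shape follows the public `/v1/chat/completions` contract, but the helper name and the exact body are my own assumptions, not what Spring AI literally serializes:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: build the kind of streaming payload a Chat Completions call carries,
// so the raw request can be replayed with curl or HttpClient and the 400
// error body inspected. Spring AI's real payload may differ; this is a
// debugging aid, not a reproduction of its serializer.
public class StreamingPayloadProbe {

    // Hand-built JSON body; "stream": true mirrors the failing .stream() path.
    static String buildPayload(String model, String prompt) {
        return """
                {
                  "model": "%s",
                  "stream": true,
                  "messages": [{"role": "user", "content": "%s"}]
                }""".formatted(model, prompt);
    }

    public static void main(String[] args) throws Exception {
        String body = buildPayload("gpt-5", "hello");
        System.out.println(body);

        // To replay against the API, uncomment and set OPENAI_API_KEY:
        // HttpRequest req = HttpRequest.newBuilder(URI.create("https://api.openai.com/v1/chat/completions"))
        //         .header("Authorization", "Bearer " + System.getenv("OPENAI_API_KEY"))
        //         .header("Content-Type", "application/json")
        //         .POST(HttpRequest.BodyPublishers.ofString(body))
        //         .build();
        // HttpResponse<String> resp = HttpClient.newHttpClient()
        //         .send(req, HttpResponse.BodyHandlers.ofString());
        // System.out.println(resp.statusCode() + "\n" + resp.body());
    }
}
```

OpenAI's 400 response body usually names the offending parameter; inside the reactive chain the same detail is available from `WebClientResponseException#getResponseBodyAsString()`.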
Comment From: saranshbansal
Looks like explicit output conversion for streamed responses is flaky.
Using a non-streaming response without an explicit `BeanOutputConverter` works fine. (Not suggesting a solution here!)
Have you tried with 1.1.0-SNAPSHOT?
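For reference, a non-streaming variant of the endpoint from the report might look like the sketch below. It assumes the same `chatModelOpenAI` bean and endpoint names as the original example; `.call().entity(...)` applies the bean conversion internally, so no manual `BeanOutputConverter` is needed:

```java
// Sketch of a blocking variant of the original endpoint (assumed names).
// .call() blocks, so it is shifted off the event loop via boundedElastic.
@GetMapping(value = "/llm/actors/openai-blocking", produces = MediaType.APPLICATION_JSON_VALUE)
public Mono<ActorsFilms> llmActorsOpenAIBlocking() {
    return Mono.fromCallable(() -> ChatClient.create(this.chatModelOpenAI)
                    .prompt()
                    .user(u -> u.text("Generate the filmography of 5 movies for {actor}.")
                            .param("actor", "Klaus Kinski"))
                    .call()
                    .entity(ActorsFilms.class))
            .subscribeOn(Schedulers.boundedElastic());
}
```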
Comment From: xxx24xxx
I have now tested it with version `spring-ai-bom:1.1.0-SNAPSHOT`: still the same behavior. It doesn't work with model `gpt-5`, but it does work with `gpt-4.1`.
Comment From: amagnolo
The issue occurs even without using a `BeanOutputConverter`.
Example:

```java
ChatClient.ChatClientRequestSpec request = chat.prompt("hello")
        .options(ChatOptions.builder().temperature(1.0).build());
```

- `request.call().content()` → works as expected
- `request.stream().content()` → returns HTTP 400 when `subscribe()` is called
Models affected: `gpt-5`, `gpt-5-mini`, `o4-mini`
Models not affected: `gpt-4.1`, `gpt-4o-mini`
So it seems related to reasoning models. The effect is the same with Spring AI 1.0.1 and 1.1.0-SNAPSHOT.
Comment From: mpalourdio
I can confirm what @amagnolo says about streaming; it also affects `gpt-5-chat` and `gpt-5-nano`. I do not use any `BeanOutputConverter` on my side either.