When using the DeepSeekChatModel.call() method in Spring AI (non-streaming mode), the following error occurs:
HTTP 422 - Failed to deserialize the JSON body into the target type: messages[0].role: unknown variant USER, expected one of system, user, assistant, tool
Bug description
After debugging, I found that the role field in the request body is serialized as "USER" (uppercase), but DeepSeek (which follows the OpenAI-compatible format) expects lowercase values such as "user", "assistant", and "system".
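For context, this kind of casing mismatch usually comes down to how Jackson serializes the role enum: without an explicit JSON name, Jackson uses the constant's name() and produces "USER". A minimal standalone sketch (not Spring AI's actual classes, just an illustration of the two behaviors):

import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.ObjectMapper;

public class RoleSerializationDemo {

    // Without a JSON name, Jackson serializes enum constants by name(): "USER"
    enum RawRole { USER }

    // With @JsonProperty, the constant serializes as the lowercase wire value: "user"
    enum MappedRole { @JsonProperty("user") USER }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        System.out.println(mapper.writeValueAsString(RawRole.USER));    // "USER" -> rejected by DeepSeek
        System.out.println(mapper.writeValueAsString(MappedRole.USER)); // "user" -> accepted
    }
}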
Environment
- Spring AI version: 1.0.1
- Spring Boot version: 3.2.5
- JDK version: 21
Steps to reproduce
1. Set up a DeepSeekChatModel with a valid API key.
2. Use call() to make a non-streaming chat completion request.
3. Observe the error response (HTTP 422).
spring:
  ai:
    deepseek:
      api-key: sk-xxx
      base-url: https://api.deepseek.com
      chat:
        options:
          model: deepseek-chat
Expected behavior
The role field in the generated message should be lowercase (e.g. "user"), in compliance with OpenAI/DeepSeek schema.
Minimal Complete Reproducible example

@Test
void testDeepSeekChatCallRoleCaseIssue() {
    // This is a simplified example showing the issue in non-streaming mode
    DeepSeekChatModel model = new DeepSeekChatModel(
            new DeepSeekApi("sk-xxx", RestClient.builder()
                    .baseUrl("https://api.deepseek.com") // OpenAI-compatible base URL
                    .build()
            )
    );

    // Single-turn request using .call(), fails with HTTP 422 due to role casing
    Prompt prompt = new Prompt(List.of(new Message(Role.USER, "Hello")));
    ChatResponse response = model.call(prompt);
}
Running this test fails with:
HTTP 422 - Failed to deserialize the JSON body into the target type: messages[0].role: unknown variant USER, expected one of system, user, assistant, tool
Comment From: sunyuhan1998
Could you please try using the Spring AI 1.1.0-SNAPSHOT version? The following code is what I tested with 1.1.0-SNAPSHOT, and it works correctly:
public static void main(String[] args) {
    DeepSeekApi api = DeepSeekApi.builder()
            .baseUrl("https://api.deepseek.com")
            .apiKey("<API_KEY>")
            .build();
    DeepSeekChatModel model = DeepSeekChatModel.builder().deepSeekApi(api).build();
    DeepSeekChatOptions deepSeekChatOptions = DeepSeekChatOptions.builder().build();
    Prompt prompt = new Prompt(List.of(new UserMessage("What are the planets in the solar system?")), deepSeekChatOptions);
    ChatClient client = ChatClient.builder(model).build();
    client.prompt(prompt).stream()
            .chatResponse()
            .doOnNext(resp -> System.out.print(resp.getResult().getOutput().getText()))
            .blockLast();
}
Comment From: abssyyy
Thanks for the suggestion!
I tested again using spring-ai version 1.1.0-SNAPSHOT, with the following configuration:
spring:
  ai:
    deepseek:
      api-key: sk-xxx
      base-url: https://api.deepseek.com
      chat:
        options:
          model: deepseek-chat
I'm using the auto-configured deepSeekChatModel bean, and calling the model with:
chatModel.call(new UserMessage("Hello"));
Unfortunately, the issue still persists. The error returned is:
HTTP 422 - Failed to deserialize the JSON body into the target type: messages[0].role: unknown variant `USER`, expected one of `system`, `user`, `assistant`, `tool`
However, I did find a workaround:
When I explicitly build a Prompt and pass it to the model, it works:
Prompt prompt = new Prompt(List.of(new UserMessage("Hello")));
chatModel.call(prompt);
It seems that the simplified call(UserMessage) overload is still serializing the role as "USER" (uppercase), while the Prompt-based version correctly uses "user" (lowercase).
So this might be a bug specific to the shortcut method call(UserMessage) in combination with DeepSeek's strict role handling.
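For reference, in recent Spring AI versions the call(Message...) shortcut on ChatModel is (roughly, the exact shape may differ between versions) just a default method that wraps the messages in a Prompt, so in principle both paths should serialize identically:

// Paraphrased from ChatModel's default method; not the verbatim source
default String call(Message... messages) {
    Prompt prompt = new Prompt(Arrays.asList(messages));
    Generation generation = call(prompt).getResult();
    return (generation != null) ? generation.getOutput().getText() : "";
}

If that holds, the difference might lie elsewhere than in the overload itself.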
Comment From: sunyuhan1998
@abssyyy could you provide me with the minimized code that reproduces the problem you're describing? I tried to reproduce it as you said, but it works fine. Here is my test code:
public static void main(String[] args) {
    DeepSeekApi api = DeepSeekApi.builder()
            .baseUrl("https://api.deepseek.com")
            .apiKey("<API_KEY>")
            .build();
    DeepSeekChatModel model = DeepSeekChatModel.builder().deepSeekApi(api).build();
    String response = model.call(new UserMessage("Hello!"));
    System.out.println(response);
}
Response:
Hello! How can I assist you today?
Comment From: abssyyy
Thanks for the clarification!
You're right: when I manually create the DeepSeekChatModel using:
DeepSeekApi api = DeepSeekApi.builder()
        .baseUrl("https://api.deepseek.com")
        .apiKey("<API_KEY>")
        .build();
DeepSeekChatModel model = DeepSeekChatModel.builder().deepSeekApi(api).build();
String response = model.call(new UserMessage("Hello!"));
System.out.println(response);
it works perfectly fine.
However, when I use Spring Boot auto-configuration to inject the ChatModel bean like this:
@Resource(name = "deepSeekChatModel")
private ChatModel chatModel;

@GetMapping("/chat/ali/call")
public String call(@RequestParam String query) {
    return chatModel.call(new UserMessage(query)); // fails with HTTP 422
}
it still fails with:
HTTP 422 - messages[0].role: unknown variant `USER`, expected one of `system`, `user`, `assistant`, `tool`
So it seems like the issue only occurs when using the auto-configured ChatModel bean provided by Spring Boot. My guess is that the injected bean is using a default ChatModel implementation (maybe a wrapper or proxy) that serializes Role.USER as "USER" (uppercase), which DeepSeek rejects.
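One way to check that guess would be a hypothetical debug endpoint (added next to the existing one) that logs the runtime class of the injected bean:

// Hypothetical diagnostic: reveals whether "deepSeekChatModel" is really a
// DeepSeekChatModel or some wrapper/proxy/other implementation
@GetMapping("/chat/debug")
public String debugModelClass() {
    return chatModel.getClass().getName();
}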
Comment From: sunyuhan1998
@abssyyy I have tried my best to reproduce your scenario using Spring's auto-injection to create the ChatModel, but it still works correctly.
I've created a demo: https://github.com/sunyuhan1998/spring-ai-repro-demo/tree/main/model/deepseek-demo. If it's convenient for you, please try running it to see if it works properly.
Using it is very simple: just fill in your API KEY in model/deepseek-demo/src/main/resources/application.yaml, then run model/deepseek-demo/src/main/java/com/example/spring/ai/DeepSeekApp.java.
Once the service starts, you can test it by running curl http://localhost:12345/call in your terminal to check whether the model output is returned correctly.
Comment From: abssyyy
@sunyuhan1998 Thanks for your response.
I tested with your demo using chatModel.call() and it works fine. I also tried importing spring-ai-starter-model-deepseek version 1.1.0-SNAPSHOT into another project and had no issues either.
However, in my main project, I have the following dependencies:
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-model-deepseek</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-model-openai</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-vector-store-redis</artifactId>
</dependency>
My application.yml includes both DeepSeek and OpenAI configurations:
spring:
  ai:
    deepseek:
      base-url: https://api.deepseek.com
      api-key: sk-xxx
    openai:
      api-key: sk-xxx
      base-url: https://dashscope.aliyuncs.com/compatible-mode
      embedding:
        api-key: sk-xxx
        base-url: https://dashscope.aliyuncs.com/compatible-mode
        options:
          model: text-embedding-v4
          dimensions: 1536
          encoding-format: float
          user: "yanyy"
    vectorstore:
      redis:
        prefix: "deepseek:"
        index-name: "deepseek-index"
Since DeepSeek currently doesn't support embedding models, I'm using Aliyun's embedding API via the OpenAI-compatible interface. That's the reason I have both OpenAI and DeepSeek starters configured.
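(As an aside, if the two starters' auto-configurations turn out to interfere, recent Spring AI versions expose model-selection properties that pin each capability to a single provider. Assuming the version in use supports spring.ai.model.*, something like the following might help isolate the problem:

spring:
  ai:
    model:
      chat: deepseek     # only auto-configure the DeepSeek chat model
      embedding: openai  # keep the OpenAI starter for embeddings only

)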
Here's the strange behavior I encountered:
When I inject the model using:
@Resource(name = "deepSeekChatModel")
private ChatModel chatModel;
The chatModel.call() method throws an error (422 Unprocessable Entity), but chatModel.stream() works fine.
However, when I manually construct the model like this:
DeepSeekApi api = DeepSeekApi.builder()
        .baseUrl("https://api.deepseek.com")
        .apiKey("sk-xxx")
        .build();
DeepSeekChatModel model = DeepSeekChatModel.builder().deepSeekApi(api).build();
Then both call() and stream() work as expected.
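To narrow this down, one option is to log the raw request body that the non-streaming path sends, e.g. with a RestClientCustomizer (a diagnostic sketch; it assumes the DeepSeek auto-configuration builds its RestClient from the context's RestClient.Builder):

import java.nio.charset.StandardCharsets;
import org.springframework.boot.web.client.RestClientCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class RequestLoggingConfig {

    // Logs every outgoing request body so the serialized "role" value of the
    // non-streaming call() path can be inspected on the wire
    @Bean
    RestClientCustomizer loggingCustomizer() {
        return builder -> builder.requestInterceptor((request, body, execution) -> {
            System.out.println(request.getURI() + " -> " + new String(body, StandardCharsets.UTF_8));
            return execution.execute(request, body);
        });
    }
}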
I'll investigate further on my end, but I wanted to share this in case it's helpful or points to a potential issue with auto-configuration when multiple models are used.
Thanks again!
Comment From: sunyuhan1998
> I'll investigate further on my end, but I wanted to share this in case it's helpful or points to a potential issue with auto-configuration when multiple models are used.
> Thanks again!
No problem! If you have any further findings or questions, feel free to share them here!