Bug description

When the LLM returns a 400 error message, an empty ChatCompletionChunk is returned from OpenAiApi. This is not expected, as it makes it impossible for downstream processing to catch or even detect that a 400 error has occurred.

For example, the result returned by the LLM model:

{"error":{"code":"invalid_parameter_error","param":null,"message":"<400> InternalError.Algo.InvalidParameter: An assistant message with \"tool_calls\" must be followed by tool messages responding to each \"tool_call_id\". The following tool_call_ids did not have response messages: message[2].role","type":"invalid_request_error"},"id":"chatcmpl-c4b9b68f-8be2-482f-99d4-f114c4160a0c"}

The ChatCompletionChunk produced by ModelOptionsUtils.jsonToObject(content, ChatCompletionChunk.class):

ChatCompletionChunk[id=chatcmpl-c4b9b68f-8be2-482f-99d4-f114c4160a0c, choices=[], created=null, model=null, serviceTier=null, systemFingerprint=null, object=null, usage=null]
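For illustration, here is a minimal, self-contained sketch of why the error payload silently maps to a chunk with only the id populated: lenient deserialization drops the unrecognized "error" field instead of failing. It uses plain Jackson and a simplified stand-in record, not the real Spring AI ChatCompletionChunk or ModelOptionsUtils.

```java
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.List;

public class EmptyChunkDemo {

    // Simplified stand-in for the real ChatCompletionChunk record, trimmed to two fields.
    record ChatCompletionChunk(@JsonProperty("id") String id,
                               @JsonProperty("choices") List<Object> choices) {
    }

    public static void main(String[] args) throws Exception {
        String content = """
                {"error":{"code":"invalid_parameter_error","param":null,"message":"<400> ...","type":"invalid_request_error"},"id":"chatcmpl-c4b9b68f-8be2-482f-99d4-f114c4160a0c"}
                """;

        // Lenient mapping (unknown properties ignored), similar in spirit to
        // ModelOptionsUtils.jsonToObject: the "error" object is silently dropped.
        ObjectMapper mapper = new ObjectMapper()
                .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);

        ChatCompletionChunk chunk = mapper.readValue(content, ChatCompletionChunk.class);

        // Prints a chunk with only the id populated -- no trace of the 400 error.
        System.out.println(chunk);
    }
}
```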

Environment

1.1.0

Steps to reproduce

Construct an invalid message sequence (for example, an assistant message with "tool_calls" that is not followed by the required tool messages) and send it to the LLM; see the sketch below.
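A minimal sketch of one such request, using only the JDK HTTP client. The endpoint URL, model name, and API key are placeholders and not taken from the report; the point is that the assistant message carries "tool_calls" without any tool messages answering the tool_call_id, which triggers the 400 shown above.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class IllegalToolCallRepro {

    public static void main(String[] args) throws Exception {
        // Assistant message with "tool_calls" that is NOT followed by tool messages
        // responding to each "tool_call_id" -- the model rejects this with a 400.
        String body = """
                {
                  "model": "gpt-4o",
                  "stream": true,
                  "messages": [
                    {"role": "user", "content": "What is the weather in Paris?"},
                    {"role": "assistant", "tool_calls": [
                      {"id": "call_1", "type": "function",
                       "function": {"name": "get_weather", "arguments": "{\\"city\\":\\"Paris\\"}"}}
                    ]}
                  ]
                }
                """;

        // Placeholder endpoint and credentials.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer " + System.getenv("OPENAI_API_KEY"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The body is the error JSON shown above instead of regular streamed chunks.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```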

Comment From: chickenlj

There's a similar fix from another community: https://github.com/spring-ai-alibaba/spring-ai-extensions/pull/29
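For reference, a minimal sketch of the kind of guard such a fix could apply before deserialization. The class and method names here are hypothetical and are not the actual Spring AI or spring-ai-extensions implementation.

```java
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Inspect the raw streamed payload for an "error" field before mapping it to
// ChatCompletionChunk, and surface it as an exception instead of an empty chunk.
public final class StreamErrorGuard {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    public static void rejectIfError(String rawChunkJson) {
        JsonNode node;
        try {
            node = MAPPER.readTree(rawChunkJson);
        }
        catch (JsonProcessingException ex) {
            throw new IllegalStateException("Could not parse streamed chunk: " + rawChunkJson, ex);
        }
        JsonNode error = node.get("error");
        if (error != null && !error.isNull()) {
            throw new IllegalStateException("LLM streaming response contained an error: " + error);
        }
    }
}
```

The streaming path could call StreamErrorGuard.rejectIfError(content) before ModelOptionsUtils.jsonToObject(content, ChatCompletionChunk.class), so the 400 surfaces as an exception rather than an empty chunk.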