Expected Behavior

I would like to be able to extract SafetyRatings from the ChatResponse returned by VertexAiGeminiChatClient.call(Prompt prompt). GenerateContentResponse contains a list of SafetyRatings on each Candidate, so the data is already available in the library.

Spring AI VertexAiGeminiChatClient loses information about Safety Ratings when returning ChatResponse

Current Behavior

VertexAiGeminiChatClient.call(Prompt prompt) returns a ChatResponse that does not include the SafetyRatings returned by Gemini.

Context

I'm not sure how many LLM providers return similar information, but it would be a useful feature to store it in a generic format in the Generation objects of ChatResponse, instead of losing it in this abstraction layer.
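For reference, the ratings are reachable today if you bypass Spring AI and work with the Vertex AI Java SDK response types directly. A minimal sketch, assuming the protobuf-generated classes from the google-cloud-vertexai artifact (com.google.cloud.vertexai.api) are on the classpath; the response here is built by hand only to stand in for one returned by a real generateContent call:

```java
import com.google.cloud.vertexai.api.Candidate;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.api.HarmCategory;
import com.google.cloud.vertexai.api.SafetyRating;

public class SafetyRatingExtraction {

    public static void main(String[] args) {
        // Hand-built response standing in for one returned by the SDK's
        // GenerativeModel.generateContent(...) call.
        GenerateContentResponse response = GenerateContentResponse.newBuilder()
                .addCandidates(Candidate.newBuilder()
                        .addSafetyRatings(SafetyRating.newBuilder()
                                .setCategory(HarmCategory.HARM_CATEGORY_HARASSMENT)
                                .setProbability(SafetyRating.HarmProbability.NEGLIGIBLE)))
                .build();

        // SafetyRatings live on each Candidate of GenerateContentResponse;
        // this is the data that is currently dropped when mapping to ChatResponse.
        response.getCandidatesList().forEach(candidate ->
                candidate.getSafetyRatingsList().forEach(rating ->
                        System.out.println(rating.getCategory() + " -> " + rating.getProbability())));
    }
}
```

A generic fix on the Spring AI side could serialize these per-candidate ratings into each Generation's metadata, so other providers with comparable signals could populate the same slot.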

Comment From: markpollack

Super important! The responses need a review and also tests for serialization.

Comment From: satishgutta

Yes, important. Looking forward to this. Any alternate suggestions to achieve this in the interim?

Comment From: markpollack

I'm afraid not. Just doing issue triage and this unfortunately will have to wait until post GA.