Retrieving reasoning content from deep-thinking model responses in Spring AI is overly cumbersome. Could you provide a unified API for obtaining reasoning content?

Currently, processing thinking content is very cumbersome, for example:

```java
AtomicBoolean atomicBoolean = new AtomicBoolean(false);
clientRequestSpec
        .stream()
        .chatResponse()
        .doOnNext(response -> {
            AssistantMessage output = response.getResult().getOutput();
            String text = output.getText();
            if (StringUtils.hasText(text)) {
                atomicBoolean.set(true);
            }
            boolean noThink = atomicBoolean.get();
            if (!noThink) {
                String reasoningContent;
                if (output instanceof DeepSeekAssistantMessage deepSeekAssistantMessage) {
                    reasoningContent = deepSeekAssistantMessage.getReasoningContent();
                } else {
                    Map<String, Object> metadata = output.getMetadata();
                    Object o = metadata.get("reasoningContent");
                    reasoningContent = o != null ? o.toString() : "";
                }
            }
        })
        .doOnError(error -> {
            String errorMsg = error.getMessage();
        })
        .doOnComplete(() -> {})
        .subscribe();
```
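As a side note, the `AtomicBoolean` in the snippet above only tracks one phase switch: once regular answer text arrives, later chunks are no longer reasoning. That bookkeeping could be encapsulated in a tiny state holder so the stream callback stays readable. The sketch below is purely illustrative; the class and method names are invented, not part of Spring AI:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical helper that encapsulates the "has the answer started yet?"
// bookkeeping done with a bare AtomicBoolean in the snippet above.
public class ThinkingPhaseTracker {
    private final AtomicBoolean answerStarted = new AtomicBoolean(false);

    /** Call once per streamed chunk; returns true while the model is still thinking. */
    boolean stillThinking(String answerText) {
        if (answerText != null && !answerText.isBlank()) {
            answerStarted.set(true);
        }
        return !answerStarted.get();
    }

    public static void main(String[] args) {
        ThinkingPhaseTracker tracker = new ThinkingPhaseTracker();
        System.out.println(tracker.stillThinking(""));      // reasoning chunk: no answer text yet
        System.out.println(tracker.stillThinking("Hello")); // answer begins
        System.out.println(tracker.stillThinking(""));      // phase switch is sticky
    }
}
```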

In LangChain4j, a dedicated interface for acquiring thinking content is provided, which is very convenient:

```java
public interface TokenStream {

    TokenStream onPartialResponse(Consumer<String> partialResponseHandler);

    /**
     * The provided consumer will be invoked every time a new partial thinking/reasoning text
     * (usually a single token) from a language model is available.
     */
    default TokenStream onPartialThinking(Consumer<PartialThinking> partialThinkingHandler) {
        throw new UnsupportedOperationException("not implemented");
    }

    TokenStream onCompleteResponse(Consumer<ChatResponse> completeResponseHandler);

    TokenStream onError(Consumer<Throwable> errorHandler);
}
```
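To show why this callback style is convenient, here is a minimal, self-contained imitation of it: handlers are registered fluently, and the stream routes thinking tokens separately from answer tokens, so the caller never branches on message subtypes. The class names and the canned token sequence are ours; this is not the LangChain4j implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy demonstration of the TokenStream-style API: dedicated callbacks for
// thinking tokens vs. answer tokens, registered fluently.
public class CallbackStreamDemo {

    static class SimpleTokenStream {
        private Consumer<String> onPartial = t -> {};
        private Consumer<String> onThinking = t -> {};

        SimpleTokenStream onPartialResponse(Consumer<String> h) { onPartial = h; return this; }
        SimpleTokenStream onPartialThinking(Consumer<String> h) { onThinking = h; return this; }

        // Replays a canned sequence; a real stream would be fed by the model.
        void start() {
            onThinking.accept("Let me check the units first.");
            onPartial.accept("The answer is 42.");
        }
    }

    public static void main(String[] args) {
        List<String> thinking = new ArrayList<>();
        List<String> answer = new ArrayList<>();

        new SimpleTokenStream()
            .onPartialThinking(thinking::add)
            .onPartialResponse(answer::add)
            .start();

        System.out.println("thinking=" + thinking);
        System.out.println("answer=" + answer);
    }
}
```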

I hope you can help with this. Thanks!

Comment From: sunyuhan1998

This is a great suggestion. However, we currently only support retrieving the reasoning content from a few models, such as DeepSeek. To implement the capability described in this issue, regardless of its specific form, we first need to enable reasoning retrieval across all models. As far as I know, the team is already working on this enhancement.
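For discussion purposes, one possible shape for such a unified accessor is sketched below. Everything here is hypothetical (`Message`, `ReasoningMessage`, `reasoningOf`); it only illustrates how the two extraction paths from the issue (a model-specific subtype like `DeepSeekAssistantMessage` vs. a generic metadata lookup) could be hidden behind a single call:

```java
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of a unified reasoning accessor, with minimal stand-in
// types instead of Spring AI's real message classes.
public class ReasoningExtractor {

    // Stand-in for an assistant message: text plus metadata.
    interface Message {
        String getText();
        Map<String, Object> getMetadata();
    }

    // Stand-in for a model-specific subtype that exposes reasoning directly,
    // analogous to DeepSeekAssistantMessage#getReasoningContent().
    interface ReasoningMessage extends Message {
        String getReasoningContent();
    }

    /** Returns the reasoning content if present, regardless of message subtype. */
    static Optional<String> reasoningOf(Message message) {
        if (message instanceof ReasoningMessage rm) {
            return Optional.ofNullable(rm.getReasoningContent());
        }
        Object raw = message.getMetadata().get("reasoningContent");
        return Optional.ofNullable(raw).map(Object::toString);
    }

    public static void main(String[] args) {
        // A generic message carrying reasoning only via metadata.
        Message generic = new Message() {
            public String getText() { return ""; }
            public Map<String, Object> getMetadata() {
                return Map.of("reasoningContent", "thinking...");
            }
        };
        System.out.println(reasoningOf(generic).orElse("<none>"));
    }
}
```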