Support for max_completion_tokens was introduced in the ChatCompletionsOptions class (Azure SDK for Java, package com.azure.ai.openai.models) last year.

Reference - https://github.com/Azure/azure-sdk-for-java/pull/43037

The maxTokens option is now deprecated in favor of max_completion_tokens, and maxTokens is not compatible with o1-series models.

Reference - https://learn.microsoft.com/en-us/java/api/com.azure.ai.openai.models.chatcompletionsoptions?view=azure-java-preview#com-azure-ai-openai-models-chatcompletionsoptions-setmaxtokens(java-lang-integer)
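For illustration, here is a minimal sketch of how a caller might set the new option with the Azure SDK for Java. The `setMaxCompletionTokens` setter name and the `"o1-mini"` deployment name are assumptions based on the linked PR and docs, not verified against a specific SDK release:

```java
import com.azure.ai.openai.OpenAIClient;
import com.azure.ai.openai.OpenAIClientBuilder;
import com.azure.ai.openai.models.ChatCompletions;
import com.azure.ai.openai.models.ChatCompletionsOptions;
import com.azure.ai.openai.models.ChatRequestUserMessage;
import com.azure.core.credential.AzureKeyCredential;

import java.util.List;

public class MaxCompletionTokensExample {

    public static void main(String[] args) {
        OpenAIClient client = new OpenAIClientBuilder()
                .endpoint(System.getenv("AZURE_OPENAI_ENDPOINT"))
                .credential(new AzureKeyCredential(System.getenv("AZURE_OPENAI_API_KEY")))
                .buildClient();

        ChatCompletionsOptions options = new ChatCompletionsOptions(
                List.of(new ChatRequestUserMessage("Summarize the maxTokens vs max_completion_tokens change.")));

        // Reasoning (o1-series) deployments reject max_tokens; the output cap has to be
        // sent as max_completion_tokens instead. setMaxCompletionTokens is assumed to be
        // the setter introduced by the linked PR.
        options.setMaxCompletionTokens(2048);

        ChatCompletions completions = client.getChatCompletions("o1-mini", options);
        System.out.println(completions.getChoices().get(0).getMessage().getContent());
    }
}
```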

Comment From: iAMSagar44

I have submitted a pull request for this change - https://github.com/spring-projects/spring-ai/pull/3305

Comment From: ChoHadam

Hi @iAMSagar44!

Thank you for raising this important issue. I've been working on an implementation that addresses @markpollack's feedback from PR #3305 about avoiding breaking changes.

My approach:

- Keeps all existing maxTokens methods (no breaking changes)
- Adds maxCompletionTokens as a separate, independent field
- Both options can coexist; users choose based on their model type
- Includes comprehensive unit tests

The key difference is that we don't need to deprecate or remove maxTokens, because the two parameters serve different purposes (see the sketch below):

- maxTokens for standard models
- maxCompletionTokens for reasoning models (o1, o3, etc.)
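Here is a rough sketch of the intended usage under that approach. The `maxCompletionTokens(...)` builder method and `getMaxCompletionTokens()` getter are the proposed additions and don't exist in a released Spring AI version; the deployment names are placeholders, and the other builder method names are assumed to follow current Spring AI conventions:

```java
import org.springframework.ai.azure.openai.AzureOpenAiChatOptions;

public class MaxTokensCoexistenceSketch {

    public static void main(String[] args) {
        // Standard model: keep using the existing maxTokens option, unchanged (no breaking change).
        AzureOpenAiChatOptions standard = AzureOpenAiChatOptions.builder()
                .deploymentName("gpt-4o")          // placeholder deployment name
                .maxTokens(1024)
                .build();

        // Reasoning model (o1/o3): use the new, independent maxCompletionTokens option instead.
        // maxCompletionTokens(...) is the proposed addition sketched here, not a released API.
        AzureOpenAiChatOptions reasoning = AzureOpenAiChatOptions.builder()
                .deploymentName("o1-mini")         // placeholder deployment name
                .maxCompletionTokens(4096)
                .build();

        System.out.println(standard.getMaxTokens() + " / " + reasoning.getMaxCompletionTokens());
    }
}
```

The point of keeping the two fields independent is that each maps to its own request property (max_tokens vs. max_completion_tokens), so existing callers are untouched and reasoning-model users simply opt into the new option.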

Would you like to review my implementation? I can open a PR that addresses the maintainer's concerns while preserving your original intent.

Thanks again for identifying this need!