
Calling GPT-4.1 results in an error: the model's output is truncated, which causes deserialization of the response to fail. I suspect completionTokens exceeded 1024, even though I did not explicitly set any token-limit parameter.

Does Spring AI itself impose a limit on the output length of the model? How can I resolve this issue?
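One way to rule out a default cap is to set the completion token limit explicitly. A minimal sketch, assuming the Spring AI OpenAI starter is in use (the property path follows Spring AI's `spring.ai.openai.chat.options.*` convention; verify the exact name and the model's maximum against the reference docs for your Spring AI version):

```yaml
spring:
  ai:
    openai:
      chat:
        options:
          # Raise the completion cap explicitly so any default limit
          # applied by the starter or the request is overridden.
          max-tokens: 4096
```

The same option can also be set per request by passing chat options (e.g. an `OpenAiChatOptions` built with a max-tokens value) into the `Prompt`; the builder method name differs across Spring AI versions, so check the API for the version in use. If the truncation persists with a higher cap, inspect the response's finish reason and usage metadata to confirm whether the limit is actually being hit.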