Bug description
A simple tool call that asks for the current datetime fails with "toolCallArguments cannot be null or empty".
Environment
Spring AI 1.0.0-RC1, spring-ai-starter-model-openai
Steps to reproduce
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

import org.springframework.ai.tool.annotation.Tool;

import lombok.extern.slf4j.Slf4j;

@Slf4j
public class DatetimeTools {

    private static final DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    @Tool(description = "Query current date and time")
    public String getCurrentDatetime() {
        return LocalDateTime.now().format(formatter);
    }
}
// Streaming call: fails with "toolCallArguments cannot be null or empty"
client.prompt()
        .user("What time is it now?")
        .tools(new DatetimeTools())
        .stream()
        .content()
        .subscribe(log::info);
Expected behavior
The same tool call works with a synchronous call but fails with a streaming call. Is there anything I missed?
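For comparison, here is the synchronous variant that works, a sketch assuming the same client and DatetimeTools as above:

// Synchronous call: same prompt and tool, but using call() instead of stream().
String answer = client.prompt()
        .user("What time is it now?")
        .tools(new DatetimeTools())
        .call()
        .content();
log.info(answer);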
Minimal Complete Reproducible example
Comment From: ThomasVitale
@lanyuanxiaoyao I tried running your same example, but it works correctly for me. Still, in order to be resilient to situations where a model implementation uses null toolCallArguments instead of {}, I relaxed the constraint and handled it internally in ToolCallingObservationContext. See https://github.com/spring-projects/spring-ai/pull/3236
Thanks for reporting this issue!
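The linked PR relaxes the constraint by handling null arguments internally; a minimal sketch of that kind of defensive normalization (illustrative only, not the actual Spring AI code):

// Illustrative sketch: treat null or blank tool-call arguments as an empty
// JSON object instead of rejecting them. Not the actual ToolCallingObservationContext code.
static String normalizeToolCallArguments(String toolCallArguments) {
    return (toolCallArguments == null || toolCallArguments.isBlank()) ? "{}" : toolCallArguments;
}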
Comment From: lanyuanxiaoyao
Wow, thanks for your quick response to this issue. I originally thought this issue had nothing to do with the model. In fact, I am using vLLM + Qwen3 0.6B for testing.
Comment From: ThomasVitale
@lanyuanxiaoyao could you elaborate on your setup? I can see you're using spring-ai-starter-model-openai. So you use the OpenAI module to call vLLM? Thanks!
Comment From: lanyuanxiaoyao
@ThomasVitale Sure, here is how I run vLLM:
docker run \
--privileged=true \
--shm-size=4g \
--rm \
-p 8000:8000 \
-v /root/llm/models:/models \
-e VLLM_CPU_KVCACHE_SPACE=5 \
-e VLLM_CPU_OMP_THREADS_BIND=0-11 \
-e VLLM_CPU_MOE_PREPACK=0 \
public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.8.5 \
--model /models/Qwen3-0.6B \
--port 8000 \
--host 0.0.0.0 \
--dtype bfloat16 \
--enable-auto-tool-choice \
--tool-call-parser hermes \
--uvicorn-log-level debug \
--enable-reasoning \
--reasoning-parser deepseek_r1
And if this is your first time using vLLM + Spring AI, this issue may save you some time :) https://github.com/spring-projects/spring-ai/issues/3124
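On the Spring side, the OpenAI starter can be pointed at this vLLM server through the standard properties. A sketch of the configuration, assuming the docker command above (the base-url, placeholder api-key, and model path are assumptions, not taken from the issue):

spring:
  ai:
    openai:
      base-url: http://localhost:8000   # the vLLM server started above
      api-key: dummy                    # vLLM ignores the key unless started with --api-key
      chat:
        options:
          model: /models/Qwen3-0.6B     # must match the --model path passed to vLLM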
Comment From: ThomasVitale
@lanyuanxiaoyao thanks for the additional context.
The change I linked in my previous comment is part of 1.0.0, which was released today.