Bug description After migrating from M6 to stable (1.0.0) and adjusting the parameters/dependencies accordingly, observability exposes the input and output as null.
Environment Java 21, Spring AI 1.0.0
Steps to reproduce
M6:
spring.ai.chat.observations.include-prompt=true
spring.ai.chat.observations.include-completion=true
↓
stable:
spring.ai.chat.observations.log-prompt=true
spring.ai.chat.observations.log-completion=true
management.tracing.sampling.probability=1.0
management.observations.annotations.enabled=true
spring.ai.chat.client.observations.log-prompt=true
Minimal Complete Reproducible example
stable:
M6:
Comment From: dev-jonghoonpark
This issue seems to be related: https://github.com/spring-projects/spring-ai/issues/3257
Comment From: zinit
This issue seems to be related: #3257
Somewhat, yes... reading that PR, I can see that the issue lies exactly here:
M6:
public void onStop(ChatModelObservationContext context) {
    TracingObservationHandler.TracingContext tracingContext = context
        .get(TracingObservationHandler.TracingContext.class);
    Span otelSpan = TracingHelper.extractOtelSpan(tracingContext);
    if (otelSpan != null) {
        otelSpan.addEvent(AiObservationEventNames.CONTENT_PROMPT.value(),
            Attributes.of(AttributeKey.stringArrayKey(AiObservationAttributes.PROMPT.value()),
                ChatModelObservationContentProcessor.prompt(context)));
    }
}
stable:
public void onStop(ChatModelObservationContext context) {
    logger.info("Chat Model Prompt Content:\n{}", ObservabilityHelper.concatenateStrings(prompt(context)));
}
Anyway, for now I'm reverting to the previous implementation via custom ChatModelPrompt/CompletionObservationHandlers (sketch below), and I'm eagerly looking forward to continued compatibility with external observability tools on the Spring AI side.
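For reference, here's roughly what my revert looks like: a minimal sketch of a handler that re-attaches the prompt as a span event. It assumes Spring AI's internal TracingHelper is still available; its package, the getText() accessor, and the hardcoded gen_ai.* names (the values the M6 constants resolved to) are assumptions that may vary by version.

import io.micrometer.observation.Observation;
import io.micrometer.observation.ObservationHandler;
import io.micrometer.tracing.handler.TracingObservationHandler;
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.trace.Span;
import org.springframework.ai.chat.messages.Message;
import org.springframework.ai.chat.observation.ChatModelObservationContext;
// Assumed package for the internal Spring AI tracing bridge; may vary by version.
import org.springframework.ai.observation.tracing.TracingHelper;

import java.util.List;

public class CustomChatModelPromptContentObservationHandler
        implements ObservationHandler<ChatModelObservationContext> {

    @Override
    public boolean supportsContext(Observation.Context context) {
        return context instanceof ChatModelObservationContext;
    }

    @Override
    public void onStop(ChatModelObservationContext context) {
        // The tracing handler stores its context on the observation; fetch it back.
        TracingObservationHandler.TracingContext tracingContext = context
            .get(TracingObservationHandler.TracingContext.class);
        if (tracingContext == null) {
            return;
        }
        Span otelSpan = TracingHelper.extractOtelSpan(tracingContext);
        if (otelSpan != null) {
            // Extract the prompt messages; getText() is the 1.0.0 accessor.
            List<String> prompt = context.getRequest().getInstructions().stream()
                .map(Message::getText)
                .toList();
            // Attach the prompt to the span as an OTel event (the M6 behavior),
            // instead of only writing it to the application log.
            otelSpan.addEvent("gen_ai.content.prompt",
                Attributes.of(AttributeKey.stringArrayKey("gen_ai.prompt"), prompt));
        }
    }
}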
Comment From: DannySortino
Just to add, the same issue is present in both handlers:
ChatModelPromptContentObservationHandler, for the prompt (input) side, and ChatModelCompletionObservationHandler, for the completion (output) side.
Just reverting the implementation locally and loading it into the context as a bean (sketch below) was enough for me to at least see the output content on my spans in observability platforms.
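For completeness, loading it as a bean can look like this (class and method names here are illustrative; Spring Boot automatically registers ObservationHandler beans on the auto-configured ObservationRegistry):

import io.micrometer.observation.ObservationHandler;
import org.springframework.ai.chat.observation.ChatModelObservationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ChatObservabilityConfig {

    // Declaring the bean is enough for Boot to wire it into the registry,
    // restoring the span events alongside the stable log-only behavior.
    @Bean
    ObservationHandler<ChatModelObservationContext> chatModelPromptContentHandler() {
        return new CustomChatModelPromptContentObservationHandler();
    }
}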
Comment From: PeatBoy
Has this problem been solved? I have the same problem.
Comment From: PeatBoy
Has this problem been solved? I have the same problem.
Just construct a bean yourself and replace it.
Comment From: Steffen911
Hey all, one of the Langfuse maintainers here; this issue surfaced for some of our users (e.g. https://github.com/orgs/langfuse/discussions/7612#discussioncomment-13637665).
Would it be possible to reconsider the move to logs-only on the spring-ai side? There is an ongoing discussion within the OTel SemConv GenAI group, and while the current spec is very log/event focused, there are newer changes that are less clear-cut (e.g. https://github.com/open-telemetry/semantic-conventions/pull/2179/files#diff-39110517221227cd4af9e9f76c9d569c53a1c715c38203d735f06021bcd70e0cR4, which mentions that the attributes could live on the Span or the Event). Other semantic conventions, such as OpenInference, also rely heavily on span attributes.
From our perspective as an LLM telemetry backend, receiving the span information and the prompt/completion content in two separate API calls makes processing and merging the two very expensive at scale.
One could add a new ObservationFilter implementation within the project to work around this (a sketch follows), but having framework support would make it more intuitive to configure and would allow highlighting this within the Spring AI observability documentation.
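For illustration, a rough sketch of such a filter, assuming the ChatModelObservationContext API, the getText() accessor, and the gen_ai.prompt attribute name from the OTel GenAI conventions; Spring Boot applies ObservationFilter beans to the auto-configured ObservationRegistry.

import io.micrometer.common.KeyValue;
import io.micrometer.observation.Observation;
import io.micrometer.observation.ObservationFilter;
import org.springframework.ai.chat.messages.Message;
import org.springframework.ai.chat.observation.ChatModelObservationContext;
import org.springframework.stereotype.Component;

import java.util.stream.Collectors;

@Component
public class ChatModelPromptContentObservationFilter implements ObservationFilter {

    @Override
    public Observation.Context map(Observation.Context context) {
        if (context instanceof ChatModelObservationContext chatContext) {
            // Copy the prompt onto the observation as a high-cardinality key-value,
            // which the tracing handler exports as a span attribute rather than a log,
            // so backends receive span and content in one payload.
            String prompt = chatContext.getRequest().getInstructions().stream()
                .map(Message::getText)
                .collect(Collectors.joining("\n"));
            chatContext.addHighCardinalityKeyValue(KeyValue.of("gen_ai.prompt", prompt));
        }
        return context;
    }
}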
Comment From: santannaf
@Steffen911
Your implementation works, and it's quite simple, right? Very good.
It also works for LangWatch via OpenTelemetry.
Comment From: Steffen911
@santannaf Yes, adding the class works well. It's just that we usually aim to be as easy to adopt as possible, and having custom magic happen on each export might be brittle.
Comment From: santannaf
@Steffen911 Perfect. I think in future versions the Spring team should support this natively.