Several objects should be returned as metadata, but the current implementation dates from early development and only handles the metadata returned by Azure OpenAI calls. At a minimum, a map-based structure should be added so arbitrary provider-specific properties can be carried; beyond that, the various model APIs should be compared to identify the common properties they return, so those can be promoted to first-class fields.
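The map-based idea above could look something like the following sketch. This is not the actual Spring AI `ChatGenerationMetadata` API; the class name, keys (`finishReason`, `promptTokens`, `completionTokens`), and accessors are all illustrative assumptions about how common cross-provider properties might sit alongside an open-ended map:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Hypothetical holder: an immutable map of provider-returned properties.
// Keys that several model APIs have in common could be standardized, while
// anything provider-specific still fits in the same map.
class GenerationMetadata {
    private final Map<String, Object> entries;

    GenerationMetadata(Map<String, Object> entries) {
        this.entries = Collections.unmodifiableMap(new HashMap<>(entries));
    }

    // Returns null when a provider did not supply the property.
    Object get(String key) {
        return entries.get(key);
    }

    Map<String, Object> asMap() {
        return entries;
    }
}

public class MetadataSketch {
    public static void main(String[] args) {
        Map<String, Object> raw = new HashMap<>();
        raw.put("finishReason", "stop");      // assumed common key
        raw.put("promptTokens", 13);          // assumed common key
        raw.put("completionTokens", 42);      // assumed common key

        GenerationMetadata meta = new GenerationMetadata(raw);
        System.out.println(meta.get("finishReason")); // prints "stop"
    }
}
```

The unmodifiable-map wrapper keeps the metadata read-only for consumers while leaving providers free to populate whatever their API returns.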

Comment From: markpollack

The messages for internal tool calls, and other internal "thoughts" in reasoning models, could go into the ChatGenerationMetadata, since they also represent return values from the model, just not the primary one.
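If internally handled tool calls were recorded this way, a client could read them off the generation metadata without proxying the calls itself. A minimal sketch of that consumer-side shape, where `ToolCallRecord`, its fields, and the `internalToolCalls` key are hypothetical names (not existing Spring AI API):

```java
import java.util.List;
import java.util.Map;

// Hypothetical value object describing one tool call the model resolved
// internally; the field names are assumptions, not Spring AI API.
record ToolCallRecord(String toolName, String arguments) {}

public class ToolCallMetadataSketch {

    // Assumed metadata key under which the framework would store the records.
    static final String TOOL_CALLS_KEY = "internalToolCalls";

    public static void main(String[] args) {
        // What the framework could attach after resolving tool calls itself:
        Map<String, Object> metadata = Map.of(
                TOOL_CALLS_KEY,
                List.of(new ToolCallRecord("weatherLookup", "{\"city\":\"Lisbon\"}")));

        // A client (e.g. a frontend) only inspects which tools were used;
        // it never has to execute the calls.
        @SuppressWarnings("unchecked")
        List<ToolCallRecord> calls = (List<ToolCallRecord>) metadata.get(TOOL_CALLS_KEY);
        for (ToolCallRecord call : calls) {
            System.out.println("model called tool: " + call.toolName());
        }
    }
}
```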

Comment From: andresssantos

Does this make it possible to obtain information about a tool call that was handled internally by the LLM?

For example, I want to know when the model makes a tool call, even if it is processed internally. I understand there is an option to proxy tool calls, but I don't want to handle the tool call myself; I simply want to know when the model makes a tool call and which tool was used, so that this information can be displayed on my frontend for the user.

Is there a way to achieve this? Will this feature be implemented as part of this issue, or is it a different situation?