E.g., with toolA, toolB, and toolC: after executing toolA, use toolA's result as a parameter to execute toolB, then use toolB's result as a parameter to execute toolC.
Comment From: ilayaperumalg
@zhankun Could you elaborate on your use case? While the LLM chooses the tool to execute based on the context, the client still holds control over the tool definition and how the tool gets executed. However, the LLM may not decide the chaining in the way we expect.
If you want a specific sequence of execution, you can define your tool in such a way that toolA's result gets fed into another operation that the client controls. But this may not be another toolB invoked by the LLM.
Have you tried prompt chaining instead? In that case, you feed the response from the LLM into the next prompt that gets sent to the LLM. Please see this example of the chain workflow agentic pattern.
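A minimal sketch of this idea, assuming hypothetical tools (the names `toolA`, `toolB`, `weatherForUser`, and their stubbed logic are illustrative, not Spring AI API): the client exposes a single composite tool to the LLM, and the toolA-into-toolB sequencing happens inside client code, so the model never has to plan the chain.

```java
public class CompositeTool {
    // Hypothetical toolA: look up a value for an id (stubbed for illustration).
    static String toolA(String userId) {
        return "Paris"; // stub result
    }

    // Hypothetical toolB: consume toolA's result (stubbed for illustration).
    static String toolB(String city) {
        return "Sunny in " + city; // stub result
    }

    // The one tool actually registered with the LLM. The client code,
    // not the model, controls the toolA -> toolB execution order.
    static String weatherForUser(String userId) {
        String city = toolA(userId);
        return toolB(city);
    }

    public static void main(String[] args) {
        System.out.println(weatherForUser("u42"));
    }
}
```

With this shape, the model only decides *whether* to call `weatherForUser`; the fixed sequence is guaranteed by ordinary code.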
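The chain workflow can be sketched in a few lines, assuming a stand-in `callLlm` function (in a real application this would be a call to your chat model; the stub here only echoes its prompt so the data flow is visible):

```java
public class PromptChain {
    // Stand-in for a real chat-model call; stubbed so the chaining is testable.
    static String callLlm(String prompt) {
        return "response(" + prompt + ")"; // stub
    }

    // Chain workflow: each step's LLM response becomes part of the
    // next step's prompt, giving a deterministic sequence of calls.
    static String chain(String input, String... stepPrompts) {
        String result = input;
        for (String step : stepPrompts) {
            result = callLlm(step + "\n" + result);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(chain("raw data",
                "Extract the numbers:",
                "Sort them ascending:",
                "Format them as a table:"));
    }
}
```

Unlike tool chaining, the ordering here is fixed by the loop, so the model cannot skip or reorder steps.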
Comment From: zhankun
> @zhankun Could you elaborate on your use case? While the LLM chooses the tool to execute based on the context, the client still holds control over the tool definition and how the tool gets executed. However, the LLM may not decide the chaining in the way we expect.
> If you want a specific sequence of execution, you can define your tool in such a way that toolA's result gets fed into another operation that the client controls. But this may not be another toolB invoked by the LLM.
> Have you tried prompt chaining instead? In that case, you feed the response from the LLM into the next prompt that gets sent to the LLM. Please see this example of the chain workflow agentic pattern.
Thank you very much, I have learned according to your sharing.
Comment From: CodeCodeAscension
> Could you elaborate on your use case? While the LLM chooses the tool to execute based on the context, the client still holds control over the tool definition and how the tool gets executed. However, the LLM may not decide the chaining in the way we expect.
> If you want a specific sequence of execution, you can define your tool in such a way that toolA's result gets fed into another operation that the client controls. But this may not be another toolB invoked by the LLM.
> Have you tried prompt chaining instead? In that case, you feed the response from the LLM into the next prompt that gets sent to the LLM. Please see this example of the chain workflow agentic pattern.
Hello, I'm a sophomore intern. Recently, I've been responsible for a requirement aimed at helping users solve problems in a closed-loop manner. Here's my approach: for a problem A that troubles users, I designed a process. At each node in the process, users provide some parameters, and I prepare tools to handle the problem at that node. Based on the results returned by the tools, the process enters a specific branch, where users are asked to provide data again; those nodes likewise have their own tools. The ultimate goal is to solve the user's problem on their behalf, whatever the situation turns out to be.
The ideal is beautiful, but the reality is harsh. Along the way, I found that the large model cannot reliably follow the process laid out in the prompt: parameters get extracted incorrectly, tools get called prematurely, and even the same conversation can produce different, unexpected results, so stability is a real worry. These difficulties seem almost unsolvable, so much so that I've started to rethink the leap from answering questions to solving problems. Are there better solutions? Perhaps splitting the process into very fine-grained steps isn't actually a good approach. I'm quite confused and hope to hear different ideas from experts.
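One way to reduce that instability, following the earlier point about the client keeping control: let ordinary code own the node sequence and branching, and use the model only for per-node subtasks such as parameter extraction. A minimal sketch under that assumption (the node names, the `diagnose` tool, and its keyword logic are all hypothetical stand-ins):

```java
public class GuidedWorkflow {
    // Hypothetical node-1 tool: classify the user's problem (stubbed).
    // In practice this might call an LLM just to extract/classify,
    // but the *branching decision* below stays in client code.
    static String diagnose(String userInput) {
        return userInput.contains("network") ? "NETWORK" : "OTHER";
    }

    // Hypothetical node-2 tools, one per branch (stubbed).
    static String runNode(String branch) {
        switch (branch) {
            case "NETWORK": return "restart router";
            default:        return "escalate to support";
        }
    }

    // The client drives the whole process deterministically; the model
    // never gets a chance to skip a node or call a tool prematurely.
    static String solve(String userInput) {
        String branch = diagnose(userInput); // node 1: classify
        return runNode(branch);              // node 2: branch-specific fix
    }

    public static void main(String[] args) {
        System.out.println(solve("my network is down"));
    }
}
```

The design choice is to shrink what the model is trusted with: each node asks it one narrow question, and the hard-coded state machine guarantees the order and branching, which is exactly the part the prompt alone could not make stable.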