I am using chat memory and a question arose: I asked the large model some questions and was not satisfied with an answer, so I want to regenerate it, i.e. call the chat completion endpoint again. But this time I want the unsatisfactory answer left out of the context. Can Spring AI support this requirement? What should I do?
Comment From: ilayaperumalg
@newzhiMing This sounds like an evaluator-optimiser pattern, which is explained in this example using Spring AI. Could you check it and re-open the issue if you are looking for something different? Thanks!
Comment From: newzhiMing
@ilayaperumalg Sorry, my question was not described clearly enough. I think it is a chat memory issue. In a nutshell, I am in a conversation and I want to control the context that gets sent.
For example, if the model generates an answer that I am not satisfied with and I choose to regenerate, I do not want that last unsatisfactory answer included in my context.
Comment From: newzhiMing
My essential question is whether I can control the outgoing context with the chat memory feature. For example, suppose the large model has answered me three times, I am unhappy with the third answer, and I choose to regenerate; in that case I only want the first two answers in the context.
Comment From: YunKuiLu
@newzhiMing Sure, you can add a method to your ChatMemory implementation class that deletes a specific message, and then call the LLM again after the deletion.
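A minimal sketch of that idea, using plain-Java stand-in types rather than the real Spring AI `Message` and `ChatMemory` interfaces (the names `ChatMsg`, `RegenerableChatMemory`, and `removeLastAssistantMessage` are all hypothetical, chosen just to illustrate the extra deletion method):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for a chat message; real Spring AI types differ.
record ChatMsg(String role, String content) {}

class RegenerableChatMemory {
    private final Map<String, List<ChatMsg>> store = new ConcurrentHashMap<>();

    public void add(String conversationId, ChatMsg message) {
        store.computeIfAbsent(conversationId, id -> new ArrayList<>()).add(message);
    }

    public List<ChatMsg> get(String conversationId) {
        return List.copyOf(store.getOrDefault(conversationId, List.of()));
    }

    // The extra method suggested above: drop the most recent assistant
    // reply so it never re-enters the context on the regenerate call.
    public void removeLastAssistantMessage(String conversationId) {
        List<ChatMsg> messages = store.get(conversationId);
        if (messages == null) {
            return;
        }
        for (int i = messages.size() - 1; i >= 0; i--) {
            if ("assistant".equals(messages.get(i).role())) {
                messages.remove(i);
                return;
            }
        }
    }
}
```

After calling `removeLastAssistantMessage`, the next request built from this memory simply no longer contains the rejected answer.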
Another option is to handle it manually like this (personally, I prefer this way):

```java
chatClient.prompt()
        .system(systemMessage)
        .messages(historyMessages) // only the messages you want to send to the LLM
        .user(userMessage)
        .call()
        .content();
```
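One way to build the `historyMessages` list for a regenerate call is to copy the full history minus the rejected reply. A sketch using a hypothetical plain-Java `Msg` record instead of the real Spring AI message types (the helper name `forRegeneration` is also an assumption):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a chat message; real Spring AI types differ.
record Msg(String role, String content) {}

class HistoryFilter {
    // Build the context for a regenerate call: keep everything except
    // the most recent assistant reply (the one the user rejected).
    static List<Msg> forRegeneration(List<Msg> fullHistory) {
        int lastAssistant = -1;
        for (int i = 0; i < fullHistory.size(); i++) {
            if ("assistant".equals(fullHistory.get(i).role())) {
                lastAssistant = i;
            }
        }
        if (lastAssistant < 0) {
            return List.copyOf(fullHistory); // nothing to drop
        }
        List<Msg> filtered = new ArrayList<>(fullHistory);
        filtered.remove(lastAssistant);
        return List.copyOf(filtered);
    }
}
```

The filtered list is then what you would pass to `.messages(...)`, so the model sees the earlier turns but not the answer being regenerated.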
If you still have any questions, feel free to reopen the issue.
Comment From: newzhiMing
@YunKuiLu Okay, thank you very much! I'm going to put it into practice!