Hi guys, I'm facing a recurring issue in an agent I'm developing. The agent makes a first LLM call to summarize the chat history, but sometimes, even when the chat contains only the first message and no history, the LLM returns a tool call whose input completely changes the meaning of the query. Do you guys have any idea how I can make this more reliable? Is this something I can fix with a system prompt? I'm trying to use gpt-3.5-turbo to keep costs down... I'll add some images to the thread that describe the issue in more detail.
I think this is more of a system-prompt thing (the issue is that you'd rather have the original input passed to the tool, right?)
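Besides prompting, one structural workaround is to skip the summarization call entirely when there is no prior history, so the user's original query reaches the tool verbatim and the LLM never gets a chance to rewrite it. A minimal sketch, assuming a simple list-of-strings history; the names `summarize_history` and `prepare_query` are hypothetical, not from any framework:

```python
def summarize_history(history: list[str]) -> str:
    # Placeholder for the first LLM call (e.g. gpt-3.5-turbo) that
    # condenses prior turns into a short summary. A real version would
    # call the chat-completions API here.
    return " / ".join(history)

def prepare_query(history: list[str], query: str) -> str:
    """Return the text the agent should act on."""
    if not history:
        # No prior turns: a summarization pass can only distort the
        # query, so pass the user's message through unchanged.
        return query
    summary = summarize_history(history)
    return f"Context: {summary}\nCurrent question: {query}"
```

With an empty history, `prepare_query([], "What is the refund policy?")` returns the question untouched, so the tool sees exactly what the user typed; summarization only runs when there is actually something to summarize.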