
At a glance

The community member is facing an issue with an agent they are developing. The agent has an initial LLM call to summarize the chat history, but even when the chat has only the first message and no history, the LLM returns a tool call with an input that changes the meaning of the query. The community member is asking for ideas on how to address this issue and whether it can be fixed with a system prompt. They are trying to use GPT-3.5-turbo to keep costs down.

One of the comments suggests that this may be more of a system prompt issue, where the community member would rather have the original input passed to the tool.

Hi guys, I'm facing a recurring issue in an agent I'm developing. This agent makes a first LLM call to summarize the chat history, but sometimes, even when the chat has only the first message and no history, the LLM returns a tool call with an input that completely changes the meaning of the query. Do you have any ideas about how I can make this behaviour more reliable? Is this something I can fix with a system prompt? I'm trying to use gpt-3.5-turbo to keep costs down... I'll add some images to the thread that describe the issue in more detail.
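For readers trying to picture the setup, here is a minimal sketch (Python, OpenAI client) of the kind of two-step flow described above. The function name, prompt wording, and example query are assumptions for illustration, not the poster's actual code; the point is that the first summarization call can paraphrase the input even when there is no history to condense, and whatever it produces is what the tool-calling step sees.

```python
# Assumed two-step flow: (1) condense chat history into a standalone query,
# (2) use that condensed text for the tool call. With a single message and
# no history, step (1) can still rephrase the input and shift its meaning.
from openai import OpenAI

client = OpenAI()

def condense_query(chat_history: list[dict], latest_message: str) -> str:
    """First LLM call: rewrite history + latest message as one standalone query."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Rewrite the conversation below as a single standalone query."},
            *chat_history,
            {"role": "user", "content": latest_message},
        ],
    )
    return response.choices[0].message.content

# Even with an empty history, the model may rephrase the question here,
# and the rephrased text is what reaches the tool-calling step.
standalone_query = condense_query(chat_history=[],
                                  latest_message="How do I reset my password?")
```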
3 comments
I think this is more of a system prompt thing (the issue is that you'd rather have the original input passed to the tool, right?)
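One possible shape of that system-prompt approach, again only a sketch with a hypothetical search_docs tool and assumed prompt wording: skip the summarization call when there is no history, and tell the model explicitly to copy the user's message into the tool argument verbatim. Describing the tool parameter as "the user's message, copied verbatim" in the schema nudges the model the same way.

```python
# Sketch of a mitigation, not a tested fix: a system prompt that forbids
# rephrasing, plus a tool schema whose parameter description repeats the
# same instruction. Tool name and schema are hypothetical.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a routing assistant. When you call a tool, set its `query` "
    "argument to the user's message verbatim. Do not rephrase, summarize, "
    "or expand the user's wording."
)

tools = [{
    "type": "function",
    "function": {
        "name": "search_docs",  # hypothetical tool
        "description": "Search the knowledge base.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The user's message, copied verbatim.",
                },
            },
            "required": ["query"],
        },
    },
}]

def route(user_message: str):
    """Single tool-calling step; when there is no chat history, the
    summarization call can simply be skipped and the raw message used."""
    return client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
        tools=tools,
    )
```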