Hi guys, I'm facing a recurring issue in an agent I'm developing. The agent makes a first LLM call to summarize the chat history, but sometimes, even when the chat contains only the first message and no history at all, the LLM responds with a tool call whose input completely changes the meaning of the query. Does anyone have an idea how I can make this step more reliable? Is this something I can fix with a system prompt? I'm using gpt-3.5-turbo to keep costs down... I'll add some images to the thread that describe the issue in more detail.
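
For context, the summarization step looks roughly like the sketch below (simplified, placeholder names, not my exact code). One thing I'm considering is that the tools list passed to this call might be what lets the model answer with a tool call instead of a summary, so forcing `tool_choice="none"` there is one option I've thought about:

```python
from openai import OpenAI

client = OpenAI()

# Simplified sketch of the summarization step (placeholder names, not my exact code).
# The tools list is the same one the main agent uses, which may be why the model
# sometimes replies with a tool call instead of a plain-text summary.
def summarize_history(messages: list[dict], tools: list[dict]) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": "Summarize the conversation so far in one short paragraph.",
            },
            *messages,
        ],
        tools=tools,           # passing tools here allows the model to emit tool calls
        # tool_choice="none",  # forcing this is one fix I'm considering
    )
    return response.choices[0].message.content or ""
```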