@Logan M what do you think of adding

What do you think of adding some way for agents and tools to have a debug call or an error call that we can customize? In my case, I have a master OpenAI agent controlling sub agents and some query engine tools. Sometimes the model hallucinates a function call or a tool name that doesn't exist. One way to counter this is to write better tool descriptions, but what I'm looking for is smooth execution, i.e. for the sub agent not to raise an exception that the tool name doesn't exist, and instead send a message back to the master agent saying it doesn't know the answer. Basically, an exception handler that we can customize for these intermediate parts of agent execution.

If a sub agent can return a cohesive answer to the master agent when a tool isn't found, my workflow won't just come to a halt when there is tool hallucination, if that makes sense.
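Something like this hypothetical hook is the shape of what I mean (none of these names exist, just illustrating):

```python
# Purely illustrative -- these names don't exist in llama_index; this is just
# the kind of customizable handler I'm imagining.

def on_missing_tool(tool_name: str, tool_kwargs: dict) -> str:
    """Called instead of raising when the LLM requests an unknown tool."""
    return (
        f"I don't have a tool called '{tool_name}', "
        "so I can't answer that part of the question."
    )

# Imagined wiring (not a real parameter today):
# sub_agent = OpenAIAgent.from_tools(tools, missing_tool_handler=on_missing_tool)
```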
7 comments
@Vish assuming this is for a model that doesn't support function calling? Or are you seeing this even with function calling?
It's probably a good thing to support in any case, tbh. Thinking about what the right API would be. Would it make sense to be able to specify a default_tool (or perhaps name it fallback_tool) which only gets called when the LLM-provided tool cannot be found?
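Rough sketch of the dispatch I'm picturing (just illustrative, not the actual agent internals; the import path depends on your llama_index version):

```python
from llama_index.core.tools import FunctionTool

def _fallback(query: str = "") -> str:
    # Generic answer returned whenever the requested tool doesn't exist.
    return "I don't have a tool that can answer that."

fallback_tool = FunctionTool.from_defaults(
    fn=_fallback,
    name="fallback",
    description="Used when the LLM asks for a tool that is not registered.",
)

def resolve_tool(tool_name: str, tools_by_name: dict):
    # Instead of raising on a hallucinated tool name, route to the fallback.
    return tools_by_name.get(tool_name, fallback_tool)
```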
I'm using GPT-4
And still seeing tool hallucinations
I guess with custom tools, like in LangChain, we get to return our own response in a simple try/except block, which makes it pretty customizable
We could do the same with function tools here, I suppose
But query engine tools would have to be overridden, which is the messy part, I suppose
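E.g. wrapping the query engine call ourselves in a FunctionTool, roughly like this (a sketch only; the index setup and names are just examples):

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.tools import FunctionTool

# Example setup -- in practice this would be whatever query engine the
# sub agent already uses.
index = VectorStoreIndex.from_documents(SimpleDirectoryReader("data").load_data())
query_engine = index.as_query_engine()

def safe_query(query: str) -> str:
    try:
        return str(query_engine.query(query))
    except Exception as exc:
        # Return a plain message instead of letting the exception halt the agent.
        return f"I couldn't answer that with this tool ({exc})."

docs_tool = FunctionTool.from_defaults(
    fn=safe_query,
    name="docs_query",
    description="Answers questions over the indexed documents.",
)
```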