What's the best solution that handles this output parsing error?

@Logan M What's the best solution you've found that handles this output parsing error, where I believe it fails to start the final response with "AI: "? This is one of the only failure points that can make a generation fail. I've seen you discuss this before. Any heads-up on how I can just get rid of this?

One of the only ways I can get rid of it is by clearing the memory, so the prompt is much cleaner and the model sticks to the original instruction. Maybe we can reinforce the LLM's "belief" by hammering it with "You MUST start your response with AI: ", so the instruction doesn't just get lost in the prompt and overlooked by the LLM.

Any other approaches you've seen?
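
To make the reinforcement idea concrete, here's a minimal sketch assuming LangChain's conversational ReAct agent; the extra reminder line in REINFORCED_SUFFIX is my own wording, not a documented fix:

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory

# the default conversational suffix, plus one reminder line right before
# the scratchpad so the "AI: " instruction sits close to the generation
REINFORCED_SUFFIX = """Begin!

Previous conversation history:
{chat_history}

New input: {input}
Remember: you MUST start your final response with "AI: ".
{agent_scratchpad}"""

# placeholder tool just to make the sketch runnable
tools = [Tool(name="echo", func=lambda q: q, description="Echoes the input back.")]

agent = initialize_agent(
    tools,
    OpenAI(temperature=0),
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=ConversationBufferMemory(memory_key="chat_history"),
    agent_kwargs={"suffix": REINFORCED_SUFFIX},  # hammer the instruction in
)
```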
10 comments
Switching the agent type might help, as different types have different output parsing.
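
For instance, a minimal sketch assuming LangChain's OPENAI_FUNCTIONS agent type, which returns structured function calls instead of text the parser has to split on "AI: ", so this class of parsing error disappears:

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI

# placeholder tool; swap in the tools you already use
tools = [Tool(name="echo", func=lambda q: q, description="Echoes the input back.")]

agent = initialize_agent(
    tools,
    ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0),
    agent=AgentType.OPENAI_FUNCTIONS,  # no fragile text-based output parser
)
```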

Also going to take a second to plug our agents and tools hub we launched today:
https://discordapp.com/channels/1059199217496772688/1059200134518427678/1128749054407475210

Did a ton of testing, and if you can use the function calling API from OpenAI, it's much more reliable. Here are some benchmark results across a few LLMs, comparing ReAct agents vs. function-API agents:
[Attachment: image.png — benchmark results]
If you're using OpenAI and not using the function calling API for your agent, switching is basically a free accuracy boost.
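
A minimal sketch of that switch, assuming LlamaIndex's OpenAI function-calling agent; the multiply tool is a placeholder:

```python
from llama_index.agent import OpenAIAgent
from llama_index.llms import OpenAI
from llama_index.tools import FunctionTool

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

agent = OpenAIAgent.from_tools(
    [FunctionTool.from_defaults(fn=multiply)],  # name/schema inferred from fn
    llm=OpenAI(model="gpt-3.5-turbo-0613"),  # a function-calling-capable model
    verbose=True,
)
print(agent.chat("What is 6 times 7?"))
```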
Interesting. Do you know, then, how to detect when the agent finishes?
It doesn't invoke the following function:
[Attachment: Screenshot_2023-07-13_at_5.07.35_AM.png]
Previously, reaching an answer was detected on LLM finish by checking whether the response started with "AI: ".
This opens up a series of experiments for me. I'll look into all of these. I've already swapped the agent, reduced the description size on tools, etc.

I can't detect when the agent finishes, so I can't turn off the streaming water tap.
Yeah, not 100% sure how to do this with LangChain callbacks. I know our function-calling agent just returns a single generator for the final response, and it detects the final response when function_call isn't part of the API response.

Not sure if our ReAct agent supports streaming yet though 🤔
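
To make the detection logic concrete, a minimal sketch against the raw OpenAI API (pre-1.0 openai SDK; the multiply schema is a placeholder). A response with no function_call is the final answer, which is the point where you'd close the stream:

```python
import openai  # openai<1.0 style API

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What is 6 times 7?"}],
    functions=[{
        "name": "multiply",
        "description": "Multiply two integers.",
        "parameters": {
            "type": "object",
            "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
            "required": ["a", "b"],
        },
    }],
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # intermediate step: the model wants a tool call, keep looping
    print("tool requested:", message["function_call"]["name"])
else:
    # no function_call -> this is the final response; stop streaming here
    print("final answer:", message["content"])
```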
Interestingly, I solved the problem using callbacks. I had to put the callback on the Agent, not the LLM.
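
For reference, a minimal sketch of that approach with LangChain's callback system (StopStreamingHandler is a hypothetical name): on_agent_finish fires only when the agent produces its final answer, and the handler goes on the agent executor rather than the LLM:

```python
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import AgentFinish

class StopStreamingHandler(BaseCallbackHandler):
    """Fires once the *agent* (not the LLM) returns its final answer."""

    def on_agent_finish(self, finish: AgentFinish, **kwargs) -> None:
        final_text = finish.return_values.get("output", "")
        print("agent finished:", final_text)
        # close the stream ("turn off the water tap") here

# attach to the agent, not the LLM:
# agent = initialize_agent(tools, llm, ..., callbacks=[StopStreamingHandler()])
```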

LLM Error: 'openfda.AIassistant' does not match '^[a-zA-Z0-9-]{1,64}$' - 'messages.2.function_call.name'

Is it normal for the function name to mess up like this?
I mean, I guess it's possible 🤔
Feels rare though
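
The "." in "openfda.AIassistant" is what trips the check, since the pattern in the error only allows letters, digits, and hyphens. A hedged sketch of a workaround (sanitize_function_name is a hypothetical helper, not a library function):

```python
import re

def sanitize_function_name(name: str) -> str:
    """Map a tool name onto the pattern from the error above,
    '^[a-zA-Z0-9-]{1,64}$': replace disallowed chars, cap at 64."""
    return re.sub(r"[^a-zA-Z0-9-]", "-", name)[:64]

print(sanitize_function_name("openfda.AIassistant"))  # openfda-AIassistant
```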