I am running a multi-document agent, similar to the Multi Documents Agents example. The differences: I use a local Redis vector store, I added chat memory, and I am now using ReActAgents with a local LLM via llama.cpp. My problem is that the top-level agent always wants to use the provided tools and is no longer creative. For example, when I query only "Hey my name is Paul", it wants to use all the tools.
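To illustrate the behavior I would like instead, here is a rough pure-Python sketch (this is not the LlamaIndex API, and the keyword list is made up for the example) of routing small talk away from the tool-using agent:

```python
# Illustrative sketch (not the LlamaIndex API): route small talk away from
# the tool-using agent so a greeting never triggers tool calls.
# The keyword list is made up for the example.

DOC_KEYWORDS = {"document", "documents", "compare", "summarize", "report", "section"}

def needs_tools(query: str) -> bool:
    """Crude heuristic: only send document-related queries to the agent."""
    return bool(set(query.lower().split()) & DOC_KEYWORDS)

def route(query: str) -> str:
    # "agent" = hand off to the ReAct agent with tools,
    # "chat"  = answer with a plain LLM chat turn instead.
    return "agent" if needs_tools(query) else "chat"

print(route("Hey my name is Paul"))        # chat
print(route("Compare the two documents"))  # agent
```

A real version could use a cheap classifier or router LLM call instead of keywords, but the idea is the same: don't let every turn reach the tool loop.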

The query then produces errors like:
Observation: Error: Could not parse output. Please follow the thought-action-input format. Try again.
The Observation before this error also seems too long, because it breaks off in the middle of a sentence. Might this error be related to the context window? These Observation errors occur again and again until the maximum iteration limit is reached, and I get no result.
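One mitigation I am considering is hard-capping the length of each Observation so the full ReAct prompt stays inside the context window. A rough sketch (the token budget numbers below are assumptions, not measured values):

```python
# Sketch: cap each Observation so the full ReAct prompt stays inside the
# model's context window. The numbers below are assumptions, not measured:
# a llama.cpp context of 4096 tokens and ~4 characters per token.

CONTEXT_TOKENS = 4096    # assumed context size (n_ctx) of the local model
RESERVED_TOKENS = 1024   # headroom for system prompt, history, generation
CHARS_PER_TOKEN = 4      # rough average for English text

def truncate_observation(text: str,
                         max_tokens: int = CONTEXT_TOKENS - RESERVED_TOKENS) -> str:
    """Hard-cap an observation string to an approximate token budget."""
    max_chars = max_tokens * CHARS_PER_TOKEN
    if len(text) <= max_chars:
        return text
    return text[:max_chars] + " ...[truncated]"
```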

How would you solve this problem?
Attachment: image.png
Are you using an open-source LLM?
Open-source LLMs are generally not good at producing structured output.

They require a lot of prompting, which may work for one case and not for another.
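One workaround along these lines is to parse the model output more leniently instead of erroring out on every formatting slip. A minimal sketch in plain Python (this is not the LlamaIndex parser, just the idea):

```python
import re

# Sketch: a forgiving parser for the Thought/Action/Action Input format that
# local models often get almost right. Instead of raising
# "Could not parse output", it falls back to treating the whole output
# as a final answer.

PATTERN = re.compile(
    r"Thought:\s*(?P<thought>.*?)\s*"
    r"Action:\s*(?P<action>.*?)\s*"
    r"Action Input:\s*(?P<input>.*)",
    re.DOTALL | re.IGNORECASE,
)

def parse_react(output: str) -> dict:
    m = PATTERN.search(output)
    if m:
        return {
            "type": "action",
            "action": m.group("action").strip(),
            "input": m.group("input").strip(),
        }
    # Fallback: no hard error, return the raw text as the answer.
    return {"type": "answer", "text": output.strip()}
```

A looser parser avoids the retry loop that burns through the iteration limit, at the cost of occasionally accepting a malformed step.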
Do you know what would be a good approach, then, to achieve similar functionality with local LLMs to what is demonstrated in the Multi-Documents Agent notebook? I would like to be able to compare multiple documents with each other and query details.
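For the comparison use case, something simpler than a full agent loop might already be enough: query each document's index separately, then let the LLM compare the collected answers in a single prompt. A sketch with hypothetical stand-in functions (`query_index` and `complete` are placeholders for your vector-store query and llama.cpp completion calls, not real API):

```python
# Sketch of a simpler alternative to a full multi-document agent: query each
# document's index separately, then ask the local LLM once to compare the
# answers. `query_index` and `complete` are hypothetical placeholders.

def query_index(index, question: str) -> str:
    # Placeholder: replace with your real per-document query,
    # e.g. a query-engine call against that document's vector index.
    return f"answer from {index}"

def complete(prompt: str) -> str:
    # Placeholder: replace with your llama.cpp completion call.
    return prompt

def compare_documents(question: str, doc_indexes: dict) -> str:
    # Ask the same question against every document index.
    per_doc = {name: query_index(idx, question) for name, idx in doc_indexes.items()}
    # Single synthesis prompt: the LLM only has to compare short answers,
    # not drive a multi-step tool loop it may fail to format correctly.
    prompt = (
        f"Question: {question}\n"
        "Compare the following per-document answers:\n"
        + "\n".join(f"[{name}] {answer}" for name, answer in per_doc.items())
    )
    return complete(prompt)
```

This trades the agent's flexibility for a fixed pipeline, which tends to be much more robust with local models that struggle with the ReAct format.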