The community member is experiencing issues with a react-agent and large language models (LLMs), where the LLMs often respond with "Thought: (Implicit) I can answer without any more tools!" and then give a poor answer, such as a code example in a different programming language than the one requested. The comments suggest that open-source LLMs are generally not well-suited for acting as agents, and that providing more instructions to the model in the prompt may help, though the results can still be unstable.
Does anyone have any advice on the react-agent? With some LLMs it frequently responds with "Thought: (Implicit) I can answer without any more tools!" and then goes on to give me a garbage answer. For example, I ask for a code example from my documentation and instead it outputs a similar Java example (not even the same language I asked for the example in).
Ah, you're correct. Adding more information made it work 2/5 times, but honestly it seems weirdly unstable: it doesn't make the same tool calls on each run.
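The "add more instructions" advice above can be sketched as prompt augmentation: spelling out explicit, literal rules in the ReAct system prompt before handing it to the model. This is a minimal illustrative sketch, not any particular framework's API; `build_react_system_prompt`, the base prompt text, and the rule strings are all hypothetical.

```python
# Hypothetical sketch of hardening a ReAct-style system prompt with
# explicit rules. Smaller open-source models tend to skip tool calls
# unless the expectations are stated very literally.

BASE_REACT_PROMPT = (
    "You are an assistant that answers questions using the available tools.\n"
    "Use the following format:\n"
    "Thought: reason about what to do next\n"
    "Action: the tool to call\n"
    "Action Input: the tool arguments\n"
    "Observation: the tool result\n"
    "... (repeat Thought/Action/Observation as needed)\n"
    "Thought: I can answer without any more tools!\n"
    "Answer: the final answer\n"
)

def build_react_system_prompt(extra_instructions):
    """Append numbered, explicit rules to the base ReAct prompt."""
    numbered = "\n".join(
        f"{i}. {rule}" for i, rule in enumerate(extra_instructions, start=1)
    )
    return f"{BASE_REACT_PROMPT}\nAdditional rules you MUST follow:\n{numbered}\n"

prompt = build_react_system_prompt([
    "Always call the documentation search tool at least once before answering.",
    "Quote code examples verbatim from the tool output; do not invent your own.",
    "Answer in the programming language the user asked for.",
])
print(prompt)
```

Even with rules like these, as noted above, results can vary run to run; lowering the sampling temperature and keeping the rule list short and concrete are common mitigations.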