Agent struggles to provide direct answer to user's question about Canadian budget.

does anyone know how to avoid this behavior?
question to the agent: can you tell me about canadian budget
answer:
Plain Text
The current language of the user is English. I need to use a tool to help me answer the question about the Canadian budget. Action: query_engine_tool Action Input: {"input": "Canadian budget 2023"}

(instead of using the tool, it prints the internal reasoning)
seems like it should have used the tool, based on the code in the output parser?

Although it may have failed to parse, since the format isn't quite correct according to the regex
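Rough sketch of what that looks like, assuming the default ReActOutputParser (the module path and exact behavior can differ between llama-index versions):

Plain Text
from llama_index.core.agent.react.output_parser import ReActOutputParser

# The quoted output is missing the leading "Thought:" marker and the newlines
# the default format expects between Thought / Action / Action Input
raw = (
    'The current language of the user is English. I need to use a tool to help me '
    'answer the question about the Canadian budget. '
    'Action: query_engine_tool Action Input: {"input": "Canadian budget 2023"}'
)

step = ReActOutputParser().parse(raw)
# Without the expected markers, the parser can fall back to treating the whole
# text as a final answer instead of an action step, so the raw reasoning is
# what gets shown to the user
print(type(step).__name__)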
imo open source llms suck at being react agents. Try using llama3.1/2/3 ?
interesting! I didn't know it was a problem with parsing. It indeed happened a lot more with gpt-4o-mini. I will try to adjust the question / system header. thanks
Oh, if you are using OpenAI, I wouldn't even bother with ReAct. Use the FunctionCallingAgent imo
Plain Text
from llama_index.core.agent import FunctionCallingAgent
from llama_index.llms.openai import OpenAI

agent = FunctionCallingAgent.from_tools(tools, llm=OpenAI(...))
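For reference, a minimal self-contained sketch of that suggestion (lookup_budget and the model name are placeholders standing in for the thread's query_engine_tool and LLM):

Plain Text
from llama_index.core.agent import FunctionCallingAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI

# Placeholder tool standing in for the original query_engine_tool
def lookup_budget(query: str) -> str:
    """Dummy budget lookup."""
    return f"No budget data found for: {query}"

tools = [FunctionTool.from_defaults(fn=lookup_budget)]
agent = FunctionCallingAgent.from_tools(tools, llm=OpenAI(model="gpt-4o-mini"))

# Tool selection goes through the model's native function-calling API,
# so there is no ReAct-style text format for the model to get wrong
print(agent.chat("can you tell me about the canadian budget"))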
thanks. managed to reproduce it. using the input "hi" with the ReActAgent breaks the agent. using the way you mentioned did work nicely!
while this agent has the streaming method, it doesn't let us use it. is there a way to use streaming directly in the agent instead of at the query_engine level (https://docs.llamaindex.ai/en/latest/module_guides/deploying/query_engine/streaming/#streaming)?
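(For context, the query-engine-level streaming from that link looks roughly like this; the index setup here is just a placeholder:)

Plain Text
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Placeholder index; in the thread this would be whatever index backs query_engine_tool
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine(streaming=True)
streaming_response = query_engine.query("can you tell me about canadian budget")
# Prints tokens as they arrive instead of waiting for the full answer
streaming_response.print_response_stream()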
Ah yea it's not implemented just yet for the generic function calling agent (i need to add that lol)

Try the OpenAIAgent

Plain Text
from llama_index.agent.openai import OpenAIAgent

agent = OpenAIAgent.from_tools(tools, llm=llm)

# stream_chat returns a streaming response; response_gen yields tokens as they arrive
resp = agent.stream_chat("hello world")
for t in resp.response_gen:
    print(t, end="", flush=True)
thank you very much!!