
Challenges with using Claude Sonnet 3.5 and ReactAgent

Has anyone found that Claude Sonnet 3.5 doesn't work very well with the ReactAgent? It hallucinates many steps at once, so the output looks like:

Plain Text
Action:
Thought:

Action:
Thought:
Observation:


Working through it, but wow. Just doesn't seem to work too well
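The failure mode described above is a parsing problem: a ReAct loop expects the model to emit one Thought/Action pair, stop, run the tool, and only then see a real Observation. Below is a minimal illustrative sketch (not LlamaIndex's actual parser) of why a completion that hallucinates several steps at once breaks that loop:

```python
import re

def parse_react_step(llm_output: str) -> dict:
    """Parse one ReAct step: expect exactly one Action and no Observation.

    A well-behaved model emits a single Action and stops so the framework
    can execute the tool and append the real Observation. If the model
    hallucinates several Action/Observation blocks in one completion,
    the step is ambiguous, so we reject it.
    """
    actions = re.findall(r"^Action:\s*(.*)$", llm_output, flags=re.MULTILINE)
    if len(actions) != 1:
        raise ValueError(
            f"expected exactly 1 Action, found {len(actions)} -- "
            "model likely hallucinated future steps"
        )
    if re.search(r"^Observation:", llm_output, flags=re.MULTILINE):
        raise ValueError("model hallucinated an Observation it never saw")
    return {"action": actions[0].strip()}

# parse_react_step(good) returns {"action": "add"};
# parse_react_step(bad) raises because it contains two Actions.
good = "Thought: I should add the numbers\nAction: add"
bad = "Thought: add\nAction: add\nObservation: 7\nThought: done\nAction: finish"
```

A common mitigation is configuring `Observation:` as a stop sequence on the LLM call, so the model physically cannot continue past its first action.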
It would be great to have the best of both worlds: the robustness of tool calling and the 'thinking' aspect of ReAct agents. Is this something I could use workflows for? Essentially, a modified ReAct system prompt combined with function calling:

https://docs.llamaindex.ai/en/stable/examples/workflow/function_calling_agent/

When looking at a function calling agent, it doesn't look like the output of one tool is fed into the next one. Is that correct?

It looks like the functions are all called and then returned as part of the AgentChatResponse, where sources is the tool output, which makes sense. Per the examples, it seems like the user has to return the result and ask a follow-up question.
The output of the tool is fed back into the conversation history, so the LLM will use that to inform any future tool calls.

imo you can get the "thinking" aspect by adding a "justification" field for tools too
Ah I see, I missed this part where it is adding it to the memory:

https://github.com/run-llama/llama_index/blob/aa192413a398b5330d23a4901a42976419bb7128/llama-index-core/llama_index/core/agent/function_calling/step.py#L205

Plain Text
        function_message = ChatMessage(
            content=str(tool_output),
            role=MessageRole.TOOL,
            additional_kwargs={
                "name": tool_call.tool_name,
                "tool_call_id": tool_call.tool_id,
            },
        )
        sources.append(tool_output)
        memory.put(function_message)
imo you can get the "thinking" aspect by adding a "justification" field for tools too

Interesting, yeah that would be an easy implementation to make it a required param
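One way to make that concrete: put the justification in the function signature itself, so it becomes a required field in the tool's JSON schema. This is an illustrative sketch, not an official LlamaIndex pattern:

```python
def multiply(justification: str, a: float, b: float) -> float:
    """Multiply two numbers.

    Args:
        justification: One sentence explaining why this tool is being
            called. Making it a required parameter recovers the ReAct
            'Thought' inside an ordinary function call, as suggested above.
        a: First factor.
        b: Second factor.
    """
    # The justification is supplied by the model but unused by the tool;
    # it can be logged for debugging or tracing.
    return a * b
```

Wrapping this with LlamaIndex's `FunctionTool.from_defaults(fn=multiply)` should expose `justification` as a required string parameter that the model has to fill in on every call (assuming the LLM respects required schema fields).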
Was looking at this; from what I can see, it's not possible for it to take the output of one tool and feed it into another tool in one turn:

https://docs.llamaindex.ai/en/stable/examples/agent/openai_agent_parallel_function_calling/

It looks like the LLM is only involved at each turn, and the tool inputs appear to be hard-coded values rather than the output of another function.

Am I understanding this right? Is it possible for a tool to use the output of a different tool call in 1 turn?
(Attachment: image.png)
If one tool needs the output of another tool, why not just combine them and make a single tool?
That's generally what I advise people to do.
But also, if you need super custom stuff, this is why we made workflows, so you can make your own thing πŸ’ͺ
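The "combine them into a single tool" advice can be sketched with a hypothetical add/multiply pair: instead of hoping the agent chains them across turns, the chaining happens in plain Python inside one tool call.

```python
def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

def add_then_multiply(a: float, b: float, factor: float) -> float:
    """Single tool that pipes add()'s output into multiply() in one call,
    so computing something like (a + b) * factor never needs a second
    agent turn."""
    return multiply(add(a, b), factor)
```

The agent then sees one tool, `add_then_multiply`, and the data dependency between the two steps is handled by ordinary code rather than by the LLM.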
Awesome, thanks for clarifying everything @Logan M ! It seems like ReactAgents essentially got superseded once function calling agents became a thing. I don't see how accuracy would be different if a justification field is being used.

Also created this issue to update the docs based on what I understand:
https://github.com/run-llama/llama_index/issues/18035
@Logan M Closed the issue. Thanks for your help looking through it.

Now I just need to understand how it can do that in a single turn πŸ€”
@Logan M You were also right that what it's doing is calling both tools at once and doing the addition in advance.
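For a question like "what is (3 + 4) * 2?", the parallel tool calls in one turn might look like the hypothetical payloads below: nothing wires add's output into multiply, the model just does the arithmetic itself and hardcodes the intermediate result (the "sneaky" behavior noted below).

```python
# Hypothetical parallel tool-call payloads emitted in a single turn.
tool_calls = [
    {"tool_name": "add", "kwargs": {"a": 3, "b": 4}},
    # The model precomputed 3 + 4 = 7 and hardcoded it as multiply's input.
    {"tool_name": "multiply", "kwargs": {"a": 7, "b": 2}},
]
```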
(Attachment: image.png)
heh yes -- sneaky llms πŸ˜‚
Makes me both laugh and cry haha