It would be great to have the best of both worlds: the robustness of tool calling and the 'thinking' aspect of ReAct agents. Would this be something I could basically use workflows for? Essentially, just use a modified ReAct system prompt with function calling, building on this example: https://docs.llamaindex.ai/en/stable/examples/workflow/function_calling_agent/
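Something like the sketch below is what I have in mind: a function-calling LLM driven in a plain loop with a ReAct-flavored system prompt, feeding each tool result back into the chat history. The method names (`chat_with_tools`, `get_tool_calls_from_response`) and the tool-message format are based on my reading of the linked example and may not be exact, so treat this as a sketch rather than working code:

```python
from llama_index.core.llms import ChatMessage
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI


def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b


tools = [FunctionTool.from_defaults(fn=multiply)]
tools_by_name = {t.metadata.name: t for t in tools}
llm = OpenAI(model="gpt-4o")

# ReAct-flavored system prompt, but the actual tool use goes through
# native function calling instead of parsing Thought/Action text.
REACT_STYLE_PROMPT = (
    "You are a helpful assistant. Before calling a tool, briefly reason about "
    "which tool to use and why. When you have enough information, answer directly."
)

chat_history = [
    ChatMessage(role="system", content=REACT_STYLE_PROMPT),
    ChatMessage(role="user", content="What is 3 * 7, and then that times 2?"),
]

while True:
    response = llm.chat_with_tools(tools, chat_history=chat_history)
    tool_calls = llm.get_tool_calls_from_response(
        response, error_on_no_tool_call=False
    )
    chat_history.append(response.message)
    if not tool_calls:
        break  # no further tool calls -> this message is the final answer
    for tool_call in tool_calls:
        tool = tools_by_name[tool_call.tool_name]
        output = tool(**tool_call.tool_kwargs)
        # Feed the tool result back so the next LLM turn can build on it
        chat_history.append(
            ChatMessage(
                role="tool",
                content=str(output),
                additional_kwargs={"tool_call_id": tool_call.tool_id},
            )
        )

print(chat_history[-1].content)
```

The part I care about is the loop: each tool output goes back into `chat_history`, so the model can 'think', call a tool, see the result, and decide what to do next.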
When looking at a function calling agent, it doesn't look like the output of one tool is fed into the next one. Is that correct? It looks like the functions are all called and then returned as part of the AgentChatResponse, where sources is the tool output, which makes sense. Per the examples, it seems like you have to have the user return the result and ask a follow-up question.
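For reference, this is roughly how I'm reading it from the caller's side, using the prebuilt FunctionCallingAgentWorker as a stand-in for the workflow version (class names and attributes are from memory, so treat them as assumptions):

```python
from llama_index.core.agent import FunctionCallingAgentWorker
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI


def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b


agent = FunctionCallingAgentWorker.from_tools(
    [FunctionTool.from_defaults(fn=add)],
    llm=OpenAI(model="gpt-4o"),
).as_agent()

# First turn: tools get called and the outputs show up in response.sources
response = agent.chat("What is 2 + 2?")
for source in response.sources:  # AgentChatResponse.sources -> list of ToolOutput
    print(source.tool_name, source.raw_output)

# Second turn: the user comes back with a follow-up that builds on the result
followup = agent.chat("Now add 10 to that result.")
print(followup.response)
```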