@Alwiiiiiiiin instead of doing llm.chat_with_tools() -> llm.get_tool_calls_from_response(), you'll either need to replace that logic entirely with a ReAct agent, or use the lower-level functions to build the ReAct loop more from scratch.
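
For the first option, here's a minimal sketch assuming the ReActAgent.from_tools API (exact import paths vary between llama-index versions) and Ollama as a stand-in LLM class; swap in whatever model/LLM class you're actually using:

```python
# Minimal sketch: ReActAgent as a drop-in replacement for the manual
# chat_with_tools flow. Import paths may differ in your llama-index version.
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.ollama import Ollama  # placeholder LLM choice

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

llm = Ollama(model="llama3")  # swap in your actual open-source model
agent = ReActAgent.from_tools(
    tools=[FunctionTool.from_defaults(fn=multiply)],
    llm=llm,
    verbose=True,  # prints the Thought/Action/Observation steps
)
print(agent.chat("What is 6 times 7?"))
```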
There's a lower-level example of doing ReAct here:
https://docs.llamaindex.ai/en/stable/examples/workflow/react_agent/

Note that if you are using ReAct, it's probably because you are using open-source LLMs? And open-source LLMs really suck at being agents for the most part, just an FYI.
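
If you do go the from-scratch route, the core loop is roughly this: prompt for Thought/Action/Action Input, parse the action, run the tool, feed the observation back, and repeat. A minimal sketch assuming only llm.chat() (the prompt format and naive parsing here are illustrative, not the library's internals):

```python
# From-scratch ReAct loop sketch. Only llm.chat() is assumed; everything
# else (prompt, regex parsing, the multiply tool) is illustrative.
import re
from llama_index.core.llms import ChatMessage

REACT_SYSTEM = (
    "Answer the question, using tools when needed.\n"
    "Tools: multiply(a, b) -> product of two integers.\n"
    "Format:\n"
    "Thought: <reasoning>\n"
    "Action: <tool name>\n"
    "Action Input: <a>, <b>\n"
    "Wait for an Observation, then repeat, or finish with:\n"
    "Answer: <final answer>"
)

def multiply(a: int, b: int) -> int:
    return a * b

def run_react(llm, question: str, max_steps: int = 5) -> str:
    messages = [
        ChatMessage(role="system", content=REACT_SYSTEM),
        ChatMessage(role="user", content=question),
    ]
    for _ in range(max_steps):
        text = llm.chat(messages).message.content
        if "Answer:" in text:
            return text.split("Answer:", 1)[1].strip()
        # Naive parse of the Action / Action Input lines
        action = re.search(r"Action:\s*(\w+)", text)
        args = re.search(r"Action Input:\s*(.+)", text)
        if action and args and action.group(1) == "multiply":
            a, b = (int(x) for x in args.group(1).split(","))
            observation = str(multiply(a, b))
        else:
            observation = "Could not parse a valid action."
        # Feed the observation back so the model can take the next step
        messages.append(ChatMessage(role="assistant", content=text))
        messages.append(ChatMessage(role="user", content=f"Observation: {observation}"))
    return "Gave up after max_steps."
```

This is basically what the workflow example in the link above does, just with proper output parsing and streaming on top.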