If you want to run function calls automatically, use a prebuilt agent:
```python
from llama_index.core.agent import FunctionCallingAgent

# assumes `tools` (a list of tools) and `llm` (a function-calling LLM) are already defined
agent = FunctionCallingAgent.from_tools(tools, llm=llm)
response = agent.chat("Hello!")
```
This runs the entire agent loop for you (tool calling, writing a final response, etc.).
The advantage of the "boilerplate" code I linked above is that it gives you more control over how things are called, which honestly is extremely helpful in practice: better error handling, ensuring certain things happen when a tool is called, injecting arguments, etc.
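To make that concrete, here's a framework-free sketch of the kind of control a hand-rolled loop gives you (plain Python, no LlamaIndex; `search_docs`, the `TOOLS` registry, and the `user_id` injection are hypothetical names for illustration). It wraps each tool call with error handling and injects an argument the model never supplies:

```python
def search_docs(query: str, user_id: str) -> str:
    # `user_id` is injected by the loop, not supplied by the model
    return f"results for {query!r} (user {user_id})"

# registry of callable tools, keyed by the name the model emits
TOOLS = {"search_docs": search_docs}

def call_tool(name: str, model_args: dict, injected: dict) -> str:
    """Run one tool call with error handling and argument injection."""
    if name not in TOOLS:
        return f"error: unknown tool {name!r}"  # surface the error instead of crashing
    try:
        # merge model-provided args with loop-injected ones
        return TOOLS[name](**model_args, **injected)
    except TypeError as exc:
        return f"error: bad arguments for {name}: {exc}"

print(call_tool("search_docs", {"query": "agents"}, {"user_id": "u1"}))
```

In a prebuilt agent these steps are hidden inside `agent.chat()`; writing the loop yourself is where you'd hook in logging, retries, or validation per tool call.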