
Hi there, I am following this tutorial

Hi there, I am following this tutorial to build a retrieval-augmented agent: https://docs.llamaindex.ai/en/stable/examples/agent/openai_agent_retrieval/
However, I am wondering whether async queries work here. Assume I want the agent to use 5 tools.
from llama_index.core import VectorStoreIndex
from llama_index.core.objects import ObjectIndex
from llama_index.agent.openai import OpenAIAgent

# Index the tools so the agent can retrieve the most relevant ones per query
obj_index = ObjectIndex.from_objects(
    query_engine_tools,
    index_cls=VectorStoreIndex,
)
agent = OpenAIAgent.from_tools(
    llm=self.llm_azure_gpt35,  # my Azure OpenAI GPT-3.5 LLM
    tool_retriever=obj_index.as_retriever(similarity_top_k=5),
    verbose=True,
)

response = await agent.aquery("Use 5 tools to plan the route")

From watching the verbose output, it seems the agent uses the tools one by one, which is time-consuming.
4 comments
async just means asynchronous execution; it does not make the agent use multiple tools at once

Whether tools get called sequentially or in parallel is up to the LLM and how it generates function calls

On each message, the LLM gets a chance to generate function calls. If it generates one call, we execute the tool, then loop and see what the LLM decides to do next
The LLM can invoke more than one tool at once, but that implies it already knows the input to each tool at the time of that single LLM call
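To make the loop described above concrete, here is a minimal, self-contained sketch in plain Python with a stubbed "LLM" and a stubbed tool (all names here are hypothetical, not LlamaIndex APIs). Each turn, the LLM may emit zero or more tool calls; if it emits several in one turn, they can run concurrently with asyncio.gather, and an empty turn means the LLM produced a final answer instead:

```python
import asyncio

# Hypothetical stand-in for a real tool (e.g. a query engine): simulate I/O latency.
async def route_tool(city: str) -> str:
    await asyncio.sleep(0.01)
    return f"route segment via {city}"

async def agent_loop(fake_llm_turns):
    """Sketch of the agent loop: each turn, the 'LLM' (here a pre-scripted
    list of turns) emits zero or more tool calls. Calls emitted in the
    same turn run concurrently; an empty turn ends the loop."""
    observations = []
    for tool_calls in fake_llm_turns:
        if not tool_calls:  # LLM produced a final answer instead of tool calls
            break
        results = await asyncio.gather(*(route_tool(arg) for arg in tool_calls))
        observations.extend(results)
    return observations

# Turn 1: two parallel calls; turn 2: one call; turn 3: final answer.
turns = [["Paris", "Lyon"], ["Nice"], []]
print(asyncio.run(agent_loop(turns)))
```

The point of the sketch: aquery makes each tool call non-blocking, but calls from different turns are still sequential, because the LLM only decides the next call after seeing the previous result.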
If you want it to plan a DAG of tool calls up front, try the query plan tool
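The idea behind a query plan is that the LLM emits a whole DAG of sub-queries in one shot, so nodes whose dependencies are satisfied can run concurrently. The following is a rough, self-contained sketch of executing such a plan with stub tools; the plan structure and names are illustrative, not LlamaIndex's actual QueryPlanTool API:

```python
import asyncio

# A plan node: run its query once all of its `deps` have finished.
PLAN = {
    "flights":   {"deps": [],                     "query": "find flights"},
    "hotels":    {"deps": [],                     "query": "find hotels"},
    "itinerary": {"deps": ["flights", "hotels"],  "query": "combine results"},
}

async def run_node(name, node, results):
    await asyncio.sleep(0.01)  # stand-in for a real tool call
    dep_out = [results[d] for d in node["deps"]]
    return f"{name}: {node['query']} using {dep_out}"

async def execute_plan(plan):
    results = {}
    remaining = dict(plan)
    while remaining:
        # All nodes whose dependencies are satisfied run concurrently.
        ready = [n for n, node in remaining.items()
                 if all(d in results for d in node["deps"])]
        outs = await asyncio.gather(
            *(run_node(n, remaining[n], results) for n in ready))
        for n, out in zip(ready, outs):
            results[n] = out
            del remaining[n]
    return results

print(asyncio.run(execute_plan(PLAN)))
```

Here "flights" and "hotels" execute in the same wave, and "itinerary" runs afterwards with their outputs, which is the parallelism the sequential agent loop cannot give you.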
Thanks for the reply! I understand it now.