Thinking through multiple query sources

At a glance

The community member is asking how to think through a scenario with multiple query sources (e.g., search_web, search_internal_db, search_custom_db) where all of them should be searched. The comments explain that most large language models (LLMs), such as OpenAI's, can predict multiple tool calls in one go, and if multiple calls are predicted, they will run in parallel. One community member shows that a single LLM response can contain multiple tool calls, which can then be extracted from the response. However, the behavior depends on the specific LLM being used, as some LLMs might predict only one tool call at a time.

@Logan M what I am asking is how one thinks through a scenario where you have multiple query sources. In this example you may have search_web, search_internal_db, and search_custom_db. Let's say you want the tool to search all three: does it search all 3 of them in parallel, or should a special retriever be created instead to manage the different retriever endpoints? Does that make sense?
5 comments
Most LLMs (like OpenAI's) can predict multiple tool calls in one go.
So if multiple are predicted, they will be run in parallel:
Python
# achat_with_tools is async, so the call must be awaited
resp = await llm.achat_with_tools(...)
# extract however many tool calls the model predicted
tool_calls = llm.get_tool_calls_from_response(resp)


Here, in one call, resp can contain many tool calls. This is what is happening under the hood.
But it depends on the LLM you are using; some LLMs might predict only one tool call at a time.
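
Putting the two comments together, here is a minimal sketch of the full pattern, assuming LlamaIndex with an OpenAI function-calling model. The three search functions, the model name, and the user message are hypothetical stand-ins for the query sources in the question, not code from the thread; depending on your LlamaIndex version, you may also need to pass allow_parallel_tool_calls=True for the model to predict more than one call at once.

Python
import asyncio

from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI


# Hypothetical stand-ins for the three query sources in the question;
# real implementations would hit the actual backends.
def search_web(query: str) -> str:
    """Search the public web."""
    return f"web results for {query!r}"


def search_internal_db(query: str) -> str:
    """Search the internal database."""
    return f"internal db results for {query!r}"


def search_custom_db(query: str) -> str:
    """Search the custom database."""
    return f"custom db results for {query!r}"


tools = [
    FunctionTool.from_defaults(fn=search_web),
    FunctionTool.from_defaults(fn=search_internal_db),
    FunctionTool.from_defaults(fn=search_custom_db),
]
tools_by_name = {t.metadata.name: t for t in tools}


async def main() -> None:
    llm = OpenAI(model="gpt-4o")

    # One LLM call; the model can predict several tool calls at once
    resp = await llm.achat_with_tools(
        tools,
        user_msg="Search everywhere for our launch notes.",
        allow_parallel_tool_calls=True,
    )
    tool_calls = llm.get_tool_calls_from_response(
        resp, error_on_no_tool_call=False
    )

    # Run whatever was predicted (one call or many) concurrently
    results = await asyncio.gather(
        *(
            tools_by_name[tc.tool_name].acall(**tc.tool_kwargs)
            for tc in tool_calls
        )
    )
    for tc, result in zip(tool_calls, results):
        print(tc.tool_name, "->", result)


asyncio.run(main())

The asyncio.gather step is what actually fans the predicted calls out concurrently; an agent built on a function-calling LLM performs the equivalent step for you under the hood, which is what the comment above describes.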
Ah, ok, I did not realize that. Let me think this through more.