Thinking through multiple query sources

@Logan M what I am asking is how one should think through a scenario where you have multiple query sources. In this example you may have search_web, search_internal_db, and search_custom_db. Let's say you want the tool to search all three: does it search all 3 of them in parallel, or should a special retriever be created instead to manage the different retriever endpoints? Does that make sense?
Most LLMs (like OpenAI's) can predict multiple tool calls in one go.
So if multiple are predicted, then it will run them in parallel
Python
resp = await llm.achat_with_tools(...)
tool_calls = llm.get_tool_calls_from_response(resp)


Here, in one call, resp can contain many tool calls. This is what is happening under the hood,
but it depends on the LLM you are using; some LLMs might only predict one tool call at a time.
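To make the "run them in parallel" part concrete, here is a minimal sketch of the dispatch step using plain asyncio. The three search functions are hypothetical stand-ins for the search_web / search_internal_db / search_custom_db endpoints from the question (not real LlamaIndex tools), and the `calls` list simulates an LLM response that predicted all three tool calls at once:

```python
import asyncio

# Hypothetical stand-ins for the three retriever endpoints in the question.
async def search_web(query: str) -> str:
    return f"web results for {query!r}"

async def search_internal_db(query: str) -> str:
    return f"internal db results for {query!r}"

async def search_custom_db(query: str) -> str:
    return f"custom db results for {query!r}"

TOOLS = {
    "search_web": search_web,
    "search_internal_db": search_internal_db,
    "search_custom_db": search_custom_db,
}

async def run_tool_calls(tool_calls):
    # Dispatch every predicted call concurrently; asyncio.gather returns
    # results in the same order as the input list.
    coros = [TOOLS[name](arg) for name, arg in tool_calls]
    return await asyncio.gather(*coros)

# Simulate an LLM response that predicted all three tool calls in one go.
calls = [
    ("search_web", "llamas"),
    ("search_internal_db", "llamas"),
    ("search_custom_db", "llamas"),
]
results = asyncio.run(run_tool_calls(calls))
```

So no special merged retriever is strictly required just to fan out: if the LLM predicts all three calls, they can be awaited together like this. A custom retriever only becomes worthwhile if you want to always query every source and fuse the results regardless of what the LLM predicts.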
Ah, ok, I did not realize that. Ok, let me think this through more