@Roland Tannous from the docs, it seems that function calls are not parallel, but we can call multiple tools within a single turn of the User/Agent dialogue.
My need is to call different tools in parallel to reduce the latency
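For context, a minimal sketch of the idea, independent of any agent framework: if the tools are async, `asyncio.gather` runs them concurrently, so total latency is roughly the slowest tool rather than the sum. The tool names and sleep times here are hypothetical placeholders.

```python
import asyncio

# Hypothetical tools for illustration; real tools would hit an API or LLM.
async def search_tool(query: str) -> str:
    await asyncio.sleep(0.1)  # simulate I/O latency
    return f"search results for {query}"

async def math_tool(expr: str) -> str:
    await asyncio.sleep(0.1)  # simulate I/O latency
    return f"result of {expr}"

async def run_tools_in_parallel():
    # gather() awaits both coroutines concurrently, so the two
    # 0.1 s waits overlap instead of running back to back.
    return await asyncio.gather(
        search_tool("parallel tool calls"),
        math_tool("2 + 2"),
    )

results = asyncio.run(run_tools_in_parallel())
```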
@Roland Tannous I tried using LLMCompiler and I've got 2 problems.
The first one is that I need to generate a larger response (right now it responds with only a couple of words). Is there a way to change the final prompt? The second one is that I think the stream_chat method has not yet been implemented: I got a NotImplementedError.