Updated 12 months ago

Is there some easy way to run several instances of the same llama index object in parallel?

At a glance
Is there some easy way to run several instances of the same llama index object in parallel? I'm trying to run 3 instances of sub question query engines at the same time, but I keep hitting errors or infinite loops with the event loop. Defining an async function and running it with asyncio.run was just as slow as sequential execution.
2 comments
You should be able to just run them concurrently:

```python
import asyncio

# Inside an async function:
tasks = [query_engine.aquery(query) for query_engine in query_engines]
responses = await asyncio.gather(*tasks)
```
something like that
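To make the pattern above concrete, here is a minimal self-contained sketch. The `MockEngine` class is a stand-in for a LlamaIndex sub question query engine (an assumption for illustration; real engines expose a similar async `aquery` method). The key point is that `asyncio.gather` schedules all the coroutines at once, so I/O-bound calls overlap instead of running back to back:

```python
import asyncio


class MockEngine:
    """Stand-in for a query engine with an async aquery method."""

    def __init__(self, name: str):
        self.name = name

    async def aquery(self, query: str) -> str:
        # Simulate an I/O-bound LLM call
        await asyncio.sleep(0.1)
        return f"{self.name}: answer to {query!r}"


async def main() -> list:
    engines = [MockEngine(f"engine-{i}") for i in range(3)]
    # Build the coroutines first, then gather runs them concurrently:
    # total wall time is ~0.1s here, not ~0.3s as with sequential awaits
    tasks = [engine.aquery("What is X?") for engine in engines]
    return await asyncio.gather(*tasks)


responses = asyncio.run(main())
print(responses)
```

Note that if the three awaits were written sequentially (`await e.aquery(...)` one after another), the calls would not overlap, which matches the slow behavior described in the question.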