Parallel querying on the same vector store in llama-index

Hello!
I have a quick question about retriever query engines, since I didn't find anything about this in the documentation.
Simply put, I have a list of files and multiple questions I want answered, using these files as context. I know that I can use as_query_engine after indexing my content. However, I can query only one question at a time. Do you know if there is any built-in library support for parallel querying on the same vector store? The alternative would be to parallelize using Python processes, but it would be nice if something similar were already implemented in llama-index.
1 comment
There is async support for query_engine which you can look at:

Plain Text
query_engine = index.as_query_engine(...)
# aquery is a coroutine, so it must be awaited (e.g. inside an async function)
response = await query_engine.aquery("question")
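If you want to run several questions concurrently against the same index, a minimal sketch using asyncio.gather on top of aquery could look like the following. Here index is assumed to be a VectorStoreIndex you have already built, and the question strings are placeholders:

Plain Text
import asyncio

async def run_queries(query_engine, questions):
    # Fire all questions concurrently against the same index / vector store
    tasks = [query_engine.aquery(q) for q in questions]
    return await asyncio.gather(*tasks)

# 'index' and the questions below are placeholders for your own data
query_engine = index.as_query_engine()
questions = ["question 1", "question 2", "question 3"]
responses = asyncio.run(run_queries(query_engine, questions))
for q, r in zip(questions, responses):
    print(q, "->", r)

Each aquery call shares the same underlying vector store, so this should let you overlap the retrieval and LLM calls without spawning separate Python processes.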