Alessandro Giagnorio
Hello!
I have a quick question about retriever query engines, since I didn't find anything about this in the documentation.
I have a list of files and multiple questions I want answered, using those files as context. I know I can call as_query_engine after indexing my content, but that only lets me run one question at a time. Is there any built-in library support for parallel querying on the same vector store? The alternative would be to parallelize with Python processes, but it would be nice if something similar were already implemented in llama-index.
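For context, here is a minimal sketch of the fan-out pattern I have in mind, using asyncio.gather so all questions run concurrently against the same index without spawning separate processes. The `answer_one` coroutine below is a hypothetical placeholder; with llama-index you would swap it for the query engine's async call (query engines expose an `aquery` method alongside the synchronous `query`):

```python
import asyncio

async def answer_one(question: str) -> str:
    # Placeholder standing in for an async query-engine call,
    # e.g. `await query_engine.aquery(question)` in llama-index.
    await asyncio.sleep(0)
    return f"answer to: {question}"

async def answer_all(questions: list[str]) -> list[str]:
    # Fan out all queries concurrently; gather preserves input order,
    # so results[i] corresponds to questions[i].
    return await asyncio.gather(*(answer_one(q) for q in questions))

if __name__ == "__main__":
    questions = ["What does file A cover?", "What does file B cover?"]
    print(asyncio.run(answer_all(questions)))
```

Since the queries are I/O-bound (LLM and vector-store calls), async concurrency in one process should be enough here; multiprocessing would only add overhead.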