I am trying to run queries against my custom query engine in parallel. I have tried both concurrent.futures and multiprocessing, and both throw seemingly random errors. The pipeline works fine when I use a plain for loop, but I want to cut the processing time since waiting on each response serially is inefficient. If anyone has run a query engine in parallel with no problems, please let me know!
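For reference, here is roughly what I'm attempting, simplified into a minimal sketch: my real engine is custom, but I've swapped in a stock VectorStoreIndex here, and the imports assume llama-index v0.10+ (the `./data` folder and the questions are placeholders):

```python
from concurrent.futures import ThreadPoolExecutor

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Build a query engine the usual way (stand-in for my custom engine).
documents = SimpleDirectoryReader("./data").load_data()
query_engine = VectorStoreIndex.from_documents(documents).as_query_engine()

questions = ["What is X?", "What is Y?", "What is Z?"]

def run_query(question: str):
    return query_engine.query(question)

# A plain `for q in questions: run_query(q)` works fine;
# this parallel version is where the random errors show up.
with ThreadPoolExecutor(max_workers=4) as executor:
    responses = list(executor.map(run_query, questions))
```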
@Logan M also, I am looking to set up error tracking with LlamaIndex. Is there a list of errors I could receive from LlamaIndex, or would the errors come from the underlying LLM I've chosen? Are there any examples of this?
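For context, this is the kind of stopgap I have in mind. It's just a rough sketch: since I don't know the actual exception hierarchy, I catch broadly and log the exception type (`query_engine` carries over from the snippet above):

```python
import logging

logger = logging.getLogger(__name__)

def tracked_query(question: str):
    try:
        return query_engine.query(question)
    except Exception as exc:
        # Log the concrete type so I can see whether it's a LlamaIndex
        # error or one bubbling up from the LLM client (e.g. an OpenAI
        # rate-limit error, if that's the backend).
        logger.error("query failed: %s: %s", type(exc).__name__, exc)
        raise
```

Ideally I'd replace the broad `except Exception` with the specific exception classes, if there's a documented list.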