----------------------
Hmm, I don't know enough to give you a confident answer yet. The knowledge sources provided do not contain specific information about error handling for parallel requests in a FastAPI and Uvicorn setup with LlamaIndex.

However, it's worth noting that LlamaIndex does support parallel and asynchronous operations, as shown in the Query Pipeline with Async/Parallel Execution example.

For more specific help, you might want to check the error message you're getting and refer to the documentation of FastAPI and Uvicorn, or the LlamaIndex documentation.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
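As a rough illustration of the async/parallel pattern referenced above, the sketch below shows how several I/O-bound calls can run concurrently with `asyncio.gather`. The `fake_query` coroutine is a hypothetical stand-in (it just sleeps) for a real async call such as a LlamaIndex query engine's `aquery`; this is a minimal sketch of the general pattern, not LlamaIndex's actual implementation.

```python
import asyncio
import time

# Hypothetical stand-in for an async query call (e.g. a LlamaIndex
# query engine's `aquery`); it simulates an I/O-bound LLM request.
async def fake_query(prompt: str) -> str:
    await asyncio.sleep(0.2)  # simulate network/LLM latency
    return f"answer to {prompt!r}"

async def main() -> list[str]:
    start = time.perf_counter()
    # Launch three queries concurrently instead of sequentially.
    results = await asyncio.gather(
        fake_query("q1"), fake_query("q2"), fake_query("q3")
    )
    elapsed = time.perf_counter() - start
    # Concurrent execution: total wall time is ~0.2s, not ~0.6s.
    print(f"{len(results)} results in {elapsed:.2f}s")
    return results

if __name__ == "__main__":
    asyncio.run(main())
```

This is also why an `async def` FastAPI endpoint that performs blocking (non-async) work can stall other requests: the event loop can only interleave work at `await` points.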
@Logan M in the old version (0.9.13) the webserver and LlamaIndex were able to handle requests in parallel automatically without defining anything as async... if I do that now, I get the error from above... Is this something you guys know of?