thiomajid
Hi everyone,

I'm working on a RAG app using FastAPI and llama-index, and I'd like to know if any of you know a way to handle concurrent requests, given that the chat engine is a shared global variable. Right now I'm using Python's Lock class, but it seems to slow down the process.

I'm open to any suggestion, thanks in advance
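
For context, here is a minimal sketch of the setup described above: a llama-index chat engine built once as a global and guarded by a threading.Lock inside a FastAPI endpoint. The route name, the "data" directory, and the exact engine construction are illustrative assumptions, and the llama-index import path may differ between library versions.

```python
import threading

from fastapi import FastAPI
from pydantic import BaseModel

# llama-index imports; the module path may differ between library versions.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

app = FastAPI()

# Built once at startup and shared by every request ("data" is a placeholder path).
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
chat_engine = index.as_chat_engine()

# A single lock guarding the shared engine, as described in the question.
engine_lock = threading.Lock()


class ChatRequest(BaseModel):
    message: str


@app.post("/chat")
def chat(request: ChatRequest):
    # Every request must acquire the same lock, so calls to the engine
    # are fully serialized; this is where the slowdown comes from.
    with engine_lock:
        response = chat_engine.chat(request.message)
    return {"response": str(response)}
```

Because the lock serializes every call to the single shared engine, requests queue up behind each other, which matches the slowdown described.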
5 comments