The community member is experiencing an issue where the llama_index chat_engine returns garbled responses when multiple requests are made at the same time, with pieces of text from different requests mixed together. Another community member notes that this issue was fixed in version 0.7.22; the community member, who is on version 0.7.17, plans to update to the latest version. They later confirm that version 0.7.22 solved the issue and that the chat engine can now handle multiple requests at the same time.
Can the llama_index chat_engine handle multiple requests at the same time? I have noticed that if I make two requests at the exact same time, the responses get messy. It seems like it is returning pieces of text from the different requests.
If not, what would be the best possible solution to address this?
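Per the thread, the real fix is upgrading to llama_index 0.7.22 or later. If you are stuck on an older version, one common workaround for this kind of shared-state interleaving is to serialize access to the engine with a lock. The sketch below is illustrative only: `FakeChatEngine`, `stream_chat`, and the other names are hypothetical stand-ins for a chat engine that streams chunks into shared state, not the actual llama_index API.

```python
import threading

# Hypothetical stand-in for a chat engine that appends streamed
# chunks to shared state -- the failure mode described above, where
# two concurrent requests interleave their output.
class FakeChatEngine:
    def __init__(self):
        self.chunks = []  # (request_id, character) pairs, in arrival order

    def stream_chat(self, request_id, text):
        for ch in text:
            self.chunks.append((request_id, ch))

engine = FakeChatEngine()
engine_lock = threading.Lock()

def handle_request(request_id, text):
    # Serialize calls into the shared engine so one request's
    # response finishes streaming before the next one starts.
    with engine_lock:
        engine.stream_chat(request_id, text)

threads = [
    threading.Thread(target=handle_request, args=(i, "hello"))
    for i in range(2)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Collapse consecutive duplicate request ids; because the lock is
# held for the whole call, each request's chunks stay contiguous,
# so exactly two runs appear.
runs = []
for rid, _ in engine.chunks:
    if not runs or runs[-1] != rid:
        runs.append(rid)
assert len(runs) == 2
```

The lock trades throughput for correctness (requests queue up behind each other), which is why upgrading to a version where the engine is safe under concurrency is the better long-term answer. Another option is giving each request its own engine instance so no state is shared at all.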