In short:
Query_Engine:
It queries your data, passes the retrieved context to the LLM, and returns a response. Each query is independent.
Chat Engine:
It is a stateful analog of a Query Engine. By keeping track of the conversation history, it can answer questions with past context in mind.
So in your case, past conversation turns may be rewriting or conditioning your queries in a way that degrades the final answer.
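A minimal sketch of the difference (assuming a recent `llama_index` install, documents in a local `data/` directory, and a configured LLM backend such as an OpenAI API key):

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Query Engine: stateless, each call stands alone
query_engine = index.as_query_engine()
print(query_engine.query("What does the report say about revenue?"))

# Chat Engine: stateful, history feeds into later answers
chat_engine = index.as_chat_engine(chat_mode="condense_question")
print(chat_engine.chat("What does the report say about revenue?"))
print(chat_engine.chat("And how does that compare to last year?"))  # uses history

# If accumulated history is skewing your answers, clear it:
chat_engine.reset()
```

If history is the problem, you can also compare chat modes (e.g. `condense_question` rewrites your question using the history before querying, which is one way past turns can alter the query), or simply use a plain Query Engine when you don't need memory.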
Read more here:
https://docs.llamaindex.ai/en/stable/module_guides/deploying/chat_engines/root.html