Hey, having experimented with LangGraph previously, I was hoping to see something like
thread-level memory in LlamaIndex. However, I noticed in the create-llama example app that the entire conversation history is passed from the frontend to the backend on every interaction. Is that the expected paradigm for chat engines in LlamaIndex?
I did see
these docs, but they're not very comprehensive tbh, and they don't cover, for example, how to create a separate memory per user/conversation.
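To be concrete, here's a rough sketch of the pattern I was hoping for: memory held server-side and keyed by a conversation/session ID, so the frontend only sends the ID plus the new message instead of replaying the whole history. The dict-based store and the `get_chat_engine` helper are just my own assumptions for illustration, not anything from the docs; only `ChatMemoryBuffer` and `as_chat_engine` are actual LlamaIndex APIs as far as I can tell:

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.memory import ChatMemoryBuffer

index = VectorStoreIndex.from_documents(SimpleDirectoryReader("data").load_data())

# Hypothetical server-side store: one memory buffer per conversation.
_memories: dict[str, ChatMemoryBuffer] = {}

def get_chat_engine(session_id: str):
    # Lazily create the memory for this conversation, then reuse it
    # on every subsequent request with the same session_id.
    if session_id not in _memories:
        _memories[session_id] = ChatMemoryBuffer.from_defaults(token_limit=3000)
    return index.as_chat_engine(
        chat_mode="condense_plus_context",
        memory=_memories[session_id],
    )

# Each request then only carries the session id and the new message:
engine = get_chat_engine("user-42/conversation-1")
print(engine.chat("What did we talk about earlier?"))
```

Is something along these lines the intended approach, or is there a built-in per-thread memory abstraction I'm missing?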