Updated 6 months ago

Logan M: when we create a chat engine

At a glance

The community member's post asks how "chat history" is persisted when creating a chat engine, given that so many database integrations are available for vector stores. They also ask how to build and deploy a chatbot for a large organisation on a dataset, specifically on Azure.

In the comments, another community member responds that they are working on adding better features for persisting chat history. They mention that in the latest version of llama-index you can use json_str = chat_engine._memory.to_json() to serialize the memory and memory = ChatMemoryBuffer.from_json(json_str) to restore it, and that to/from dict options are also available.

When we create a chat engine, how is the "chat history" persisted? Just as we have so many DB integrations available for vector stores, what about the conversation history? How do we handle building a chatbot for a large organisation on a dataset and deploying it on Azure?
1 comment
We are working on adding some better features for persisting chat history.

In the latest version of llama-index, you can do

Python
json_str = chat_engine._memory.to_json()
memory = ChatMemoryBuffer.from_json(json_str)


There are also to/from dict equivalents as well.