
Updated 3 months ago


How do I persist and re-create a CondenseQuestionChatEngine?

In particular, my use case is that I'd like to allow a user to have multiple chat instances that they can resume at any time. Ideally, an instance of a chat engine could be serialized (to disk, a db, or some other form of storage), and deserialized at a later date.
5 comments
The condense chat engine has two main components
  1. The query engine/index
  2. The chat history
The index is easy enough to persist. You can also get the chat history by using chat_engine.chat_history. This returns a list of ChatMessage objects which are serializable.

Using these two objects, you can load/save and re-create the chat engine.

Python
import json

# save the index and the chat history
index.storage_context.persist(persist_dir="./storage")
chat_history_json = [x.json() for x in chat_engine.chat_history]
with open("./storage/chat_history.json", "w") as f:
    json.dump(chat_history_json, f)

# load them back and re-create the chat engine
from llama_index.llms import ChatMessage
from llama_index import StorageContext, load_index_from_storage
from llama_index.chat_engine import CondenseQuestionChatEngine

index = load_index_from_storage(StorageContext.from_defaults(persist_dir="./storage"))
with open("./storage/chat_history.json") as f:
    chat_history = [ChatMessage.parse_raw(x) for x in json.load(f)]

query_engine = index.as_query_engine()
chat_engine = CondenseQuestionChatEngine.from_defaults(
    query_engine=query_engine, chat_history=chat_history
)


Kind of tedious tbh. I can probably get a save/load functionality out in the next few days 🙏
great, that's perfect. thanks.
so that worked well for CondenseQuestionChatEngine, now how about OpenAIAgent (that seems to be the type of the engine that gets returned when you call index.as_chat_engine(chat_mode='openai'))?
I see there's a from_tools function on OpenAIAgent...but no from_defaults.
Right, because an agent needs tools to work with

In this case, that can be a query engine tool, or one of our tools from llamahub, or you can create your own tool!

(See agent tools at the top)
https://llamahub.ai/