I want to be able to save the memory object (into a file so that I can load it later), as shown in the following sample code:

# imports assume the llama-index >= 0.10 package layout
from llama_index.llms.openai import OpenAI
from llama_index.core.llms import ChatMessage
from llama_index.core.memory import ChatMemoryBuffer

llm = OpenAI(model="gpt-3.5-turbo", api_key=api_key)
memory = ChatMemoryBuffer.from_defaults(token_limit=2000)
index = create_index(text1, text2)

chat_engine = index.as_chat_engine(
    llm=llm,
    chat_mode="context",
    memory=memory,
    prefix_messages=[
        ChatMessage(role="system", content="..."),
        ChatMessage(role="assistant", content="Hello, and Welcome, please introduce yourself"),
    ],
)

for i in range(5):
    resp = chat_engine.chat("Hello my name is John").response
# get all the conversation messages
chat_conversations = memory.get()

# convert the memory into a dict format
chat_dict = memory.dict()

# now save this dict into a json file

# once you reload your server, pass the conversation back in when a chat happens
response = chat_engine.chat(message=message, chat_history=chat_history)
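Spelled out, that suggestion looks roughly like the sketch below. It is only a sketch: it reuses memory and chat_engine from the sample above, writes to a hypothetical chat_memory.json path, and serializes just the role/content fields so the reload does not depend on the exact dict layout of a particular llama-index version.

import json

from llama_index.core.llms import ChatMessage  # same import as in the sample above

# before shutdown: grab the conversation from the buffer and write it to disk
chat_conversations = memory.get()  # list of ChatMessage objects
with open("chat_memory.json", "w") as f:  # hypothetical file path
    json.dump(
        [{"role": msg.role.value, "content": msg.content} for msg in chat_conversations],
        f,
    )

# after the server restarts: rebuild the history and pass it into the next chat call
with open("chat_memory.json") as f:
    saved = json.load(f)

chat_history = [ChatMessage(role=m["role"], content=m["content"]) for m in saved]
response = chat_engine.chat(message="Hello again", chat_history=chat_history)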
But passing the full chat_history could lead to token-limit-exceeded errors, couldn't it? Memory objects have a fixed max token limit, and ChatMemoryBuffer has a summarizer mechanism that keeps the entire chat history within that limit. That's why I wanted to be able to save the memory object as it is.
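If the goal is for the buffer's own token-limit handling to survive a restart, rather than passing an ever-growing raw chat_history, one option is to persist and restore the memory object itself. The sketch below reuses llm and index from the sample above, uses the same hypothetical chat_memory.json path, and assumes your llama-index version exposes ChatMemoryBuffer.from_dict() to round-trip the dict produced by memory.dict(); if it does not, rebuilding the buffer with ChatMemoryBuffer.from_defaults(chat_history=...) from the saved messages is a fallback.

import json

from llama_index.core.memory import ChatMemoryBuffer

# before shutdown: dump the whole buffer (messages plus token_limit) to disk
with open("chat_memory.json", "w") as f:  # hypothetical file path
    json.dump(memory.dict(), f)

# on restart: rebuild the buffer and hand it straight back to the chat engine
with open("chat_memory.json") as f:
    restored_memory = ChatMemoryBuffer.from_dict(json.load(f))

chat_engine = index.as_chat_engine(
    llm=llm,
    chat_mode="context",
    memory=restored_memory,  # the buffer keeps enforcing its own token limit
)

That way the restored engine keeps applying the buffer's token-limit logic to the old conversation, instead of relying on whatever raw list was passed in as chat_history.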