----------------------
To create a chatbot with memory using LlamaIndex, you can use the GPTIndexChatMemory module. Here's an example of how to do it:
import os

from langchain.llms import OpenAIChat
from langchain.agents import initialize_agent
from llama_index import SummaryIndex
from llama_index.langchain_helpers.memory_wrapper import GPTIndexChatMemory

# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "API_KEY_HERE"

# Initialize an empty SummaryIndex to back the memory
index = SummaryIndex([])

# Set up the memory module
memory = GPTIndexChatMemory(
    index=index,
    memory_key="chat_history",
    query_kwargs={"response_mode": "compact"},
    return_source=True,
    return_messages=True,
)

# Initialize the chat model
llm = OpenAIChat(temperature=0)

# Initialize the agent with the memory module attached
agent_executor = initialize_agent(
    [], llm, agent="conversational-react-description", memory=memory
)

# Run the agent
agent_executor.run(input="hi, i am bob")

# This query now retrieves context from the SummaryIndex memory module.
agent_executor.run(input="what's my name?")
In this example, the GPTIndexChatMemory module is used to retain the context of the conversation. The memory_key parameter specifies the key under which the chat history is stored. The return_source and return_messages parameters control whether source nodes and context messages are returned, respectively.
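If you want to check what the memory has stored, you can read it back after a few turns. This is a minimal sketch, assuming GPTIndexChatMemory exposes LangChain's standard load_memory_variables method (the exact return shape depends on return_messages):

# Inspect what the memory module has stored so far.
# Assumes the standard LangChain memory interface applies to GPTIndexChatMemory.
history = memory.load_memory_variables({})

# With return_messages=True, "chat_history" holds message objects
# retrieved from the underlying SummaryIndex.
print(history["chat_history"])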
Please replace "API_KEY_HERE" with your actual OpenAI API key.
This code is extracted from the
LangchainDemo notebook in the LlamaIndex GitHub repository.
----------------------