----------------------
To build a chat engine over two retrievers, you would need to create an ensemble of the two retrievers. Unfortunately, the provided extracts do not contain specific information on how to create an ensemble of retrievers in LlamaIndex.
However, once you have your ensemble retriever, you can use it to build a chat engine similar to how it's done in the provided extracts. Here's an example of how to build a chat engine with a single retriever:
```python
from llama_index import VectorStoreIndex, SimpleDirectoryReader
from llama_index.memory import ChatMemoryBuffer

# Load data and build the index
data = SimpleDirectoryReader(input_dir="../data/paul_graham/").load_data()
index = VectorStoreIndex.from_documents(data)

# Configure the chat engine with a bounded conversation memory
memory = ChatMemoryBuffer.from_defaults(token_limit=1500)
chat_engine = index.as_chat_engine(
    chat_mode="context",
    memory=memory,
    system_prompt=(
        "You are a chatbot, able to have normal interactions, as well as "
        "talk about an essay discussing Paul Graham's life."
    ),
)
```
(Source: Chat Engine - Context Mode)
In this example, `index` is a `VectorStoreIndex` instance, and its `as_chat_engine` method builds the chat engine on top of the retriever the index provides. If you have an ensemble retriever, you would use it in place of that default retriever.
Please refer to the LlamaIndex documentation or source code for more details on how to create an ensemble of retrievers.
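In the meantime, here is a library-agnostic sketch of the ensembling idea itself: two toy retrievers whose ranked result lists are merged with reciprocal rank fusion (RRF). All class and method names here (`KeywordRetriever`, `LengthRetriever`, `EnsembleRetriever`) are illustrative inventions for this sketch, not LlamaIndex APIs.

```python
class KeywordRetriever:
    """Toy retriever: ranks documents by keyword-overlap count."""
    def __init__(self, docs):
        self.docs = docs

    def retrieve(self, query):
        terms = set(query.lower().split())
        scored = [(sum(t in d.lower() for t in terms), d) for d in self.docs]
        # Keep only documents that matched at least one term, best first
        return [d for score, d in sorted(scored, key=lambda x: -x[0]) if score]


class LengthRetriever:
    """Toy retriever: prefers shorter documents (stand-in for a second strategy)."""
    def __init__(self, docs):
        self.docs = docs

    def retrieve(self, query):
        return sorted(self.docs, key=len)


class EnsembleRetriever:
    """Merges ranked lists from several retrievers via reciprocal rank fusion."""
    def __init__(self, retrievers, k=60):
        self.retrievers = retrievers
        self.k = k  # damping constant; 60 is the value commonly used for RRF

    def retrieve(self, query):
        scores = {}
        for retriever in self.retrievers:
            for rank, doc in enumerate(retriever.retrieve(query)):
                # Each retriever contributes 1 / (k + rank + 1) for a document
                scores[doc] = scores.get(doc, 0.0) + 1.0 / (self.k + rank + 1)
        return sorted(scores, key=scores.get, reverse=True)


docs = [
    "essays by Paul Graham",
    "a very long unrelated document about cooking",
    "a collection of essays on startups and technology",
]
ensemble = EnsembleRetriever([KeywordRetriever(docs), LengthRetriever(docs)])
result = ensemble.retrieve("Paul Graham essays")
```

A real ensemble over LlamaIndex retrievers would follow the same shape: call each retriever, then fuse the ranked node lists into one before handing them to the chat engine.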
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------