You can use the `RouterQueryEngine` in LlamaIndex. It is composed on top of other query engines as tools. Here is an example of how to use it:

```python
from llama_index.core.query_engine import RouterQueryEngine
from llama_index.core.selectors import PydanticSingleSelector
from llama_index.core.tools import QueryEngineTool
from llama_index.core import VectorStoreIndex, SummaryIndex

# define query engines
...

# initialize tools
list_tool = QueryEngineTool.from_defaults(
    query_engine=list_query_engine,
    description="Useful for summarization questions related to the data source",
)
vector_tool = QueryEngineTool.from_defaults(
    query_engine=vector_query_engine,
    description="Useful for retrieving specific context related to the data source",
)

# initialize router query engine (single selection, pydantic)
query_engine = RouterQueryEngine(
    selector=PydanticSingleSelector.from_defaults(),
    query_engine_tools=[
        list_tool,
        vector_tool,
    ],
)
query_engine.query("<query>")
```
You can access the chat history through the `chat_history` property available in various chat engines such as `BaseChatEngine`, `CondenseQuestionChatEngine`, `ContextChatEngine`, `CondensePlusContextChatEngine`, and `SimpleChatEngine`. Here is an example of how to get the chat history:

```python
chat_history = chat_engine.chat_history
```
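The property returns the conversation as a list of `ChatMessage` objects, so you can, for example, inspect each recorded turn (a minimal sketch, assuming `chat_engine` is any of the engines above):

```python
# Print the role and text of every message recorded so far
for message in chat_engine.chat_history:
    print(message.role, message.content)
```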
The `RouterQueryEngine` itself does not directly handle chat history. The chat history is typically managed at the chat engine level, not the query engine level. Chat engines such as `CondenseQuestionChatEngine` or `CondensePlusContextChatEngine` maintain the conversation history through a memory module like `ChatMemoryBuffer`. They generate a standalone question from the conversation context and the last message, and then pass this to the query engine for retrieval. Here's an example:

```python
from llama_index.core.memory import ChatMemoryBuffer
from llama_index.core.chat_engine import CondensePlusContextChatEngine

memory = ChatMemoryBuffer.from_defaults(token_limit=3900)

chat_engine = CondensePlusContextChatEngine.from_defaults(
    index.as_retriever(),
    memory=memory,
    llm=llm,
    context_prompt=(
        "You are a chatbot, able to have normal interactions, as well as talk"
        " about the Kendrick and Drake beef."
        " Here are the relevant documents for the context:\n"
        "{context_str}"
        "\nInstruction: Use the previous chat history, or the context above, to interact and help the user."
    ),
    verbose=True,
)
```
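You can then chat with the engine as usual; each turn is recorded in the memory buffer (the message here is hypothetical):

```python
response = chat_engine.chat("What is the Kendrick and Drake beef about?")
print(response)
```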
In this example, the `CondensePlusContextChatEngine` uses the `ChatMemoryBuffer` to maintain the chat history. The chat engine then uses this history to generate a standalone question, which is passed to the query engine (in this case, a retriever) for processing.

To use a `RouterQueryEngine` in a similar setup, keep in mind that `CondensePlusContextChatEngine` expects a retriever, whereas `CondenseQuestionChatEngine` wraps a query engine directly. You would therefore pass your `RouterQueryEngine` instance to a `CondenseQuestionChatEngine`; the chat engine then handles the chat history and passes the condensed questions to the `RouterQueryEngine`.
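A minimal sketch of that wiring, reusing the `query_engine` (the `RouterQueryEngine`) and `llm` objects from the examples above:

```python
from llama_index.core.chat_engine import CondenseQuestionChatEngine
from llama_index.core.memory import ChatMemoryBuffer

memory = ChatMemoryBuffer.from_defaults(token_limit=3900)

# Wrap the router in a chat engine: each user message is condensed,
# together with the history, into a standalone question for the router
chat_engine = CondenseQuestionChatEngine.from_defaults(
    query_engine=query_engine,  # the RouterQueryEngine defined earlier
    memory=memory,
    llm=llm,
    verbose=True,
)

response = chat_engine.chat("Summarize what the data source says about X.")
```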
Yes, you can combine the `RouterQueryEngine`, `PandasQueryEngine`, chat history, and metadata. However, the integration of these components is not straightforward and requires a good understanding of how each component works in LlamaIndex (a combined sketch follows the list):

1. You can use the `PandasQueryEngine` by loading your data into a pandas DataFrame and then passing it to the `PandasQueryEngine`. This query engine can then be used to retrieve information based on the queries.
2. You can use the `RouterQueryEngine` to manage multiple query engines, including the `PandasQueryEngine`. You would initialize the `RouterQueryEngine` with the `PandasQueryEngine` as one of its tools.
3. You can use a chat engine such as `CondensePlusContextChatEngine` and a memory module like `ChatMemoryBuffer` to maintain the chat history.
4. Metadata for citation is not directly supported by the `PandasQueryEngine`; you would need to customize the `PandasQueryEngine` to include this metadata in its responses.
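A rough sketch of how these pieces might fit together, reusing `vector_tool` and `llm` from the earlier examples and assuming your tabular data is already loaded into a DataFrame `df`:

```python
from llama_index.core.query_engine import RouterQueryEngine, PandasQueryEngine
from llama_index.core.selectors import PydanticSingleSelector
from llama_index.core.tools import QueryEngineTool
from llama_index.core.chat_engine import CondenseQuestionChatEngine
from llama_index.core.memory import ChatMemoryBuffer

# 1. A pandas query engine over your tabular data
pandas_engine = PandasQueryEngine(df=df)
pandas_tool = QueryEngineTool.from_defaults(
    query_engine=pandas_engine,
    description="Useful for analytical questions over the tabular data",
)

# 2. A router that chooses between the pandas tool and the vector tool
router_engine = RouterQueryEngine(
    selector=PydanticSingleSelector.from_defaults(),
    query_engine_tools=[pandas_tool, vector_tool],
)

# 3. A chat engine that maintains the history and condenses each turn
#    into a standalone question for the router
chat_engine = CondenseQuestionChatEngine.from_defaults(
    query_engine=router_engine,
    memory=ChatMemoryBuffer.from_defaults(token_limit=3900),
    llm=llm,
)
```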
Please note that this is a very simplified example and might not work as-is; you would need to customize it based on your specific requirements and data. Also, it does not show how to include metadata for citation in the responses from the `PandasQueryEngine`.

The `PandasQueryEngine` in LlamaIndex is designed to operate on a pandas DataFrame. It doesn't directly interact with documents or metadata in the same way as a document retrieval-based query engine would. Instead, it performs operations on the DataFrame based on the queries it receives.

There is no built-in support for source metadata in the `PandasQueryEngine`. One way to add it might be to include an additional column in your DataFrame that contains metadata about the source of each row, and then modify the `PandasQueryEngine` to include this metadata in its responses. It's also worth noting that this approach might not be suitable for all use cases, as it assumes that each row in your DataFrame corresponds to a distinct document or source.

For more details on the `PandasQueryEngine`, please refer to the LlamaIndex documentation and source code.
Here is an example of how you might customize the `PandasQueryEngine` to include metadata about the source of each row in its responses:

```python
import pandas as pd
from llama_index.core.query_engine import PandasQueryEngine

# Assume you have a DataFrame 'df' with a 'metadata' column describing
# the source of each row
df = pd.DataFrame({
    'column1': ['data1', 'data2', 'data3'],
    'column2': ['more_data1', 'more_data2', 'more_data3'],
    'metadata': ['source1', 'source2', 'source3']
})

# You could subclass PandasQueryEngine to include metadata in its responses.
# Note that query() returns a Response object rather than a DataFrame, so
# the source metadata is attached to the response's metadata dict.
class CustomPandasQueryEngine(PandasQueryEngine):
    def query(self, query):
        # Use the parent class's query method to get the response
        response = super().query(query)
        # Attach the source metadata from the DataFrame to the response
        response.metadata = dict(response.metadata or {})
        response.metadata['sources'] = df['metadata'].tolist()
        return response

# Then you can use your custom query engine
custom_query_engine = CustomPandasQueryEngine(df)
```
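You could then query it and read the attached sources back (the query string here is hypothetical):

```python
response = custom_query_engine.query("What is the value of column1 in the first row?")
print(response)
print(response.metadata['sources'])
```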
Please note that this custom query engine is not automatically integrated with the `RouterQueryEngine` or a chat engine. You would need to integrate it into your setup accordingly.