Updated 8 months ago

@kapa.ai I have a document summary and vector store index. How do I search both indexes in a single query?
Please stand by @cmosguy as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
To search both a document summary index and a vector store index from a single entry point, you can wrap each index's query engine in a QueryEngineTool from LlamaIndex. Each tool pairs a query engine with a natural-language description, so a router or agent can decide which index should handle a given query.

Here is an example of how you can set this up:

Python
from llama_index.llms.openai import OpenAI
from llama_index.core import Settings
from llama_index.core import SimpleDirectoryReader
from llama_index.core import StorageContext, VectorStoreIndex
from llama_index.core import SummaryIndex
from llama_index.core.tools import QueryEngineTool

# load your source documents (adjust the path to your data directory)
documents = SimpleDirectoryReader("data").load_data()

# initialize settings (set chunk size)
Settings.llm = OpenAI()
Settings.chunk_size = 1024
nodes = Settings.node_parser.get_nodes_from_documents(documents)

# initialize storage context (by default it's in-memory)
storage_context = StorageContext.from_defaults()
storage_context.docstore.add_documents(nodes)

# Define Summary Index and Vector Index over Same Data
summary_index = SummaryIndex(nodes, storage_context=storage_context)
vector_index = VectorStoreIndex(nodes, storage_context=storage_context)

# define query engines
summary_query_engine = summary_index.as_query_engine(
    response_mode="tree_summarize",
    use_async=True,
)
vector_query_engine = vector_index.as_query_engine()

summary_tool = QueryEngineTool.from_defaults(
    query_engine=summary_query_engine,
    name="summary_tool",
    description=(
        "Useful for summarization questions related to the author's life"
    ),
)

vector_tool = QueryEngineTool.from_defaults(
    query_engine=vector_query_engine,
    name="vector_tool",
    description=(
        "Useful for retrieving specific context to answer specific questions about the author's life"
    ),
)
In this example, summary_tool and vector_tool wrap the query engines for your summary and vector indexes respectively. You can then hand these tools to a router or agent that dispatches each incoming query to the appropriate index.

For more details, you can refer to this source ().
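To make the tool pattern concrete without needing an API key, here is a stdlib-only sketch of the idea; the `StubEngine` class, the `Tool` dataclass, and the keyword-based selector below are hypothetical stand-ins for illustration, not LlamaIndex APIs (in LlamaIndex an LLM performs the selection using the tool descriptions):

```python
from dataclasses import dataclass

class StubEngine:
    """Hypothetical stand-in for a real query engine."""
    def __init__(self, label):
        self.label = label

    def query(self, question):
        return f"[{self.label}] answer to: {question}"

@dataclass
class Tool:
    query_engine: StubEngine
    name: str
    description: str

summary_tool = Tool(StubEngine("summary"), "summary_tool",
                    "Useful for summarization questions")
vector_tool = Tool(StubEngine("vector"), "vector_tool",
                   "Useful for retrieving specific context")

def route(question, tools):
    # Toy selector: send summarization-style questions to the summary
    # tool, everything else to the vector tool. A real router would
    # let an LLM compare the question against the tool descriptions.
    chosen = tools[0] if "summar" in question.lower() else tools[1]
    return chosen.query_engine.query(question)

print(route("Summarize the author's life", [summary_tool, vector_tool]))
print(route("Where did the author grow up?", [summary_tool, vector_tool]))
```

The point is only the shape: one entry point (`route`) fronting two engines, with descriptions driving the dispatch.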
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai is there a way to query both at the same time?
Let me check the knowledge base for you. One moment! :books:
----------------------
Based on the provided knowledge sources, there isn't a direct way to query both a Summary Index and a Vector Store Index simultaneously with a single command. These two types of indexes are queried separately in LlamaIndex.

However, you can create a function or method that takes a query as input and then queries both indexes, one after the other. Here's a simple example:

Python
def query_both_indexes(query):
    summary_response = summary_query_engine.query(query)
    vector_response = vector_query_engine.query(query)
    return summary_response, vector_response


In this function, query is the question or search term you want to look up in the indexes. The function queries both the summary and vector indexes and returns the responses from both.

Remember to replace summary_query_engine and vector_query_engine with your actual query engines.

Please note that this is a basic example and you might need to adjust it based on your specific needs and the configuration of your LlamaIndex setup.
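If latency matters, the two queries can also run concurrently instead of one after the other. Here is a stdlib-only sketch of that variant using asyncio; `StubAsyncEngine` is a hypothetical stand-in, but real LlamaIndex query engines expose an async `aquery` method you could substitute for it:

```python
import asyncio

class StubAsyncEngine:
    """Hypothetical stand-in for a query engine with an async API."""
    def __init__(self, name):
        self.name = name

    async def aquery(self, question):
        await asyncio.sleep(0)  # stands in for real network/LLM I/O
        return f"[{self.name}] {question}"

summary_engine = StubAsyncEngine("summary")
vector_engine = StubAsyncEngine("vector")

async def query_both_indexes(query):
    # issue both queries concurrently and wait for both results
    return await asyncio.gather(
        summary_engine.aquery(query),
        vector_engine.aquery(query),
    )

summary_resp, vector_resp = asyncio.run(query_both_indexes("test question"))
print(summary_resp)
print(vector_resp)
```

With real engines the concurrent version saves roughly the duration of one round trip, since the two LLM calls overlap.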

For more information on how to query these indexes, you can refer to these sources: Summary Index (), Vector Store Index ().
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai is there a way to make an agent research a query from one index, then queries another index with the information?
Let me check the knowledge base for you. One moment! :books:
----------------------
One option is the RouterQueryEngine in LlamaIndex, which routes each query to the most appropriate underlying index based on the query and the tool descriptions. Note that the router picks a single best engine per query rather than chaining them; for true multi-step research, you would give an agent both query engine tools and let it call them in sequence.

Here's an example of how you can set this up:

Python
from llama_index.core import VectorStoreIndex
from llama_index.core.tools import QueryEngineTool
from llama_index.core.query_engine import RouterQueryEngine

# define sub-indices
index1 = VectorStoreIndex.from_documents(notion_docs)  # replace with your first index
index2 = VectorStoreIndex.from_documents(slack_docs)  # replace with your second index

# define query engines and tools
tool1 = QueryEngineTool.from_defaults(
    query_engine=index1.as_query_engine(),
    description="Use this query engine to do...",
)
tool2 = QueryEngineTool.from_defaults(
    query_engine=index2.as_query_engine(),
    description="Use this query engine for something else...",
)

# define a RouterQueryEngine over them
query_engine = RouterQueryEngine.from_defaults(
    query_engine_tools=[tool1, tool2]
)

response = query_engine.query(
    "In Notion, give me a summary of the product roadmap."
)


In this example, index1 and index2 are your two indices. The RouterQueryEngine uses an LLM to choose the best sub-index to route the query to, given the descriptions. You can replace "In Notion, give me a summary of the product roadmap." with your actual query.

For more details, you can refer to this source ().
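For the literal two-step behavior asked about — researching with one index, then querying the other with the result — the calls can also be chained by hand. Here is a stdlib-only sketch with hypothetical stub engines (an agent given both QueryEngineTools would perform the same chaining, with the LLM deciding the follow-up):

```python
class StubEngine:
    """Hypothetical stand-in for a real query engine."""
    def __init__(self, label):
        self.label = label

    def query(self, question):
        return f"[{self.label}] {question}"

research_engine = StubEngine("index1")  # e.g. the summary index
answer_engine = StubEngine("index2")    # e.g. the vector index

def chained_query(question):
    # Step 1: research the question against the first index
    context = research_engine.query(question)
    # Step 2: fold that result into a follow-up query on the second index
    followup = f"Given this context: {context} -- answer: {question}"
    return answer_engine.query(followup)

result = chained_query("product roadmap")
print(result)
```

The second query sees the first index's response embedded in its prompt, which is the essence of the research-then-query pattern.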
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------