@kapa.ai I made an index with KnowledgeGraphIndex with embedding true, what is the best query for this index
Please stand by @RUPP as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
Based on the information provided in the extracts, you can query the KnowledgeGraphIndex using the query_engine object. The query_engine is created from the index object using the as_query_engine method. The query is then performed using the query method of the query_engine object.

Here is an example of how to query the index:

Plain Text
query_engine = index.as_query_engine(
    include_text=True,
    response_mode="tree_summarize",
    embedding_mode="hybrid",
    similarity_top_k=5,
)
response = query_engine.query(
    "Tell me more about what the author worked on at Interleaf",
)


In this example, the query is "Tell me more about what the author worked on at Interleaf". You can replace this with your own query. The response_mode is set to "tree_summarize", the embedding_mode is set to "hybrid", and the similarity_top_k is set to 5. These parameters can be adjusted based on your specific needs.

Please note that the query should be relevant to the data that was used to create the KnowledgeGraphIndex.
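For intuition, `embedding_mode="hybrid"` combines keyword-based and embedding-based retrieval. The following is a minimal plain-Python sketch of that idea, not LlamaIndex's actual implementation; the documents, vectors, and function names here are made up for illustration:

Plain Text
```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors; 0.0 if either is a zero vector.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_retrieve(query_terms, query_vec, docs, top_k=2):
    """docs: list of (text, embedding) pairs.
    Score each doc by keyword overlap plus embedding similarity,
    then return the top_k texts -- the gist of a 'hybrid' mode."""
    scored = []
    for text, vec in docs:
        keyword_score = len(set(query_terms) & set(text.lower().split()))
        scored.append((keyword_score + cosine(query_vec, vec), text))
    scored.sort(reverse=True)
    return [text for _, text in scored[:top_k]]

docs = [
    ("the author worked on software at interleaf", [1.0, 0.0]),
    ("the author studied painting in florence", [0.0, 1.0]),
]
print(hybrid_retrieve(["interleaf"], [1.0, 0.0], docs, top_k=1))
```

The point of combining both signals is that keyword matching catches exact entity names while embeddings catch paraphrases; `similarity_top_k` caps how many results are kept.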

For more information, you can refer to the documentation and the notebook provided in the extracts.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai how works Knowledge Graph RAG Query Engine
Let me check the knowledge base for you. One moment! :books:
----------------------
The Knowledge Graph RAG (Retrieval-Augmented Generation) Query Engine in LlamaIndex is designed to retrieve information from a Knowledge Graph based on a given task. It builds context from the SubGraph of entities related to the task.

Here's how it works:

  1. Search related entities of the question/task: The KnowledgeGraphRAGRetriever starts by searching for entities related to the question or task.
  2. Get the SubGraph of those entities: It then retrieves a SubGraph of those entities from the Knowledge Graph. By default, it retrieves a 2-depth SubGraph.
  3. Build context based on the SubGraph: Finally, it builds a context based on the retrieved SubGraph.

The search for related entities can be either keyword-extraction based or embedding based, controlled by the retriever_mode argument of the KnowledgeGraphRAGRetriever. Currently, the supported options are "keyword", "embedding" (not yet implemented), and "keyword_embedding" (not yet implemented).
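To make the "2-depth SubGraph" step concrete, here is a plain-Python sketch of collecting every triplet reachable within two hops of a seed entity. This is an illustration of the idea, not the library's code; the triplets and the `subgraph` helper are made up:

Plain Text
```python
from collections import deque

def subgraph(triplets, seeds, depth=2):
    """triplets: list of (subject, relation, object) tuples.
    Breadth-first walk from the seed entities, keeping any triplet
    whose subject is reachable within `depth` hops."""
    frontier = deque((entity, 0) for entity in seeds)
    seen = set(seeds)
    result = []
    while frontier:
        entity, d = frontier.popleft()
        if d >= depth:
            continue  # don't expand beyond the requested depth
        for s, r, o in triplets:
            if s == entity:
                result.append((s, r, o))
                if o not in seen:
                    seen.add(o)
                    frontier.append((o, d + 1))
    return result

triplets = [
    ("Peter Quill", "is", "Star-Lord"),
    ("Star-Lord", "leads", "Guardians of the Galaxy"),
    ("Guardians of the Galaxy", "appear in", "Marvel films"),
]
print(subgraph(triplets, {"Peter Quill"}, depth=2))
```

With depth=2 the walk keeps the first two triplets but stops before the third, which is three hops from the seed; the retrieved triplets are then serialized into the LLM's context.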

Here is an example of how to use RetrieverQueryEngine and KnowledgeGraphRAGRetriever:

Plain Text
from llama_index.query_engine import RetrieverQueryEngine
from llama_index.retrievers import KnowledgeGraphRAGRetriever

graph_rag_retriever = KnowledgeGraphRAGRetriever(
    storage_context=storage_context,
    service_context=service_context,
    llm=llm,
    verbose=True,
)

query_engine = RetrieverQueryEngine.from_args(
    graph_rag_retriever, service_context=service_context
)


Then you can query it like:

Plain Text
response = query_engine.query(
    "Tell me about Peter Quill?",
)
display(Markdown(f"<b>{response}</b>"))

For more details, you can refer to the documentation and the notebook provided in the extracts.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai when I call this function:

graph_rag_retriever = KnowledgeGraphRAGRetriever(
    storage_context=storage_context,
    service_context=service_context,
    llm=lc_llm,
    verbose=True,
    retriever_mode='keyword_embedding'
)

it raised the following error:

# TBD: will implement this later with vector store.
--> 750 raise NotImplementedError