So the KG index has a few modes. If include_text=True (the default), then once the triplets are retrieved for a query, the text chunk each triplet was extracted from is included in the prompt to the LLM.
num_chunks_per_query limits how many chunks are included here; the default is 10. Chunks are sorted by how many retrieved triplets they are connected to, i.e. if 3 triplets come from 1 text chunk, that chunk should definitely be included before the limit is applied.
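The ranking described above can be sketched in plain Python. The triplets, chunk ids, and parameter name below are illustrative, not the real internal API; this just shows the idea of keeping the best-connected chunks first:

```python
from collections import Counter

# Hypothetical retrieved triplets, each paired with the id of the text
# chunk it was extracted from (names are made up for illustration).
retrieved = [
    (("Alice", "works_at", "Acme"), "chunk_1"),
    (("Acme", "located_in", "Berlin"), "chunk_1"),
    (("Alice", "knows", "Bob"), "chunk_1"),
    (("Bob", "works_at", "Globex"), "chunk_2"),
]

num_chunks_per_query = 1  # keep only the best-connected chunk

# Count how many retrieved triplets point back to each chunk...
counts = Counter(chunk_id for _, chunk_id in retrieved)
# ...and keep the chunks that support the most triplets.
top_chunks = [c for c, _ in counts.most_common(num_chunks_per_query)]
print(top_chunks)  # chunk_1 supports 3 triplets, so it survives the limit
```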
similarity_top_k is an additional parameter for fetching triplets using embeddings. It's only used if retriever_mode="embedding" or "hybrid"; the default mode is "keyword".
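To make the two retrieval modes concrete, here is a toy sketch of the difference: keyword mode matches query keywords against triplet entities, while embedding mode ranks triplets by similarity and keeps the similarity_top_k best. The triplets and hand-made 2-d "embeddings" are invented for illustration; the real index uses an embedding model:

```python
import math

# Toy KG triplets with made-up 2-d "embeddings" (purely illustrative).
triplets = {
    ("Alice", "works_at", "Acme"): [1.0, 0.0],
    ("Acme", "located_in", "Berlin"): [0.8, 0.6],
    ("Bob", "plays", "chess"): [0.0, 1.0],
}

def keyword_retrieve(query_keywords):
    # retriever_mode="keyword": match query keywords against the
    # entities appearing in each triplet.
    return [t for t in triplets if any(kw in t for kw in query_keywords)]

def embedding_retrieve(query_emb, similarity_top_k=2):
    # retriever_mode="embedding": rank triplets by cosine similarity
    # to the query embedding, keep the similarity_top_k best.
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))
    ranked = sorted(triplets, key=lambda t: cos(triplets[t], query_emb),
                    reverse=True)
    return ranked[:similarity_top_k]

print(keyword_retrieve({"Alice"}))
print(embedding_retrieve([1.0, 0.1], similarity_top_k=1))
```

Hybrid mode simply combines both result sets before building the prompt.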
Definitely check out the example notebook for some exploration into this:
https://gpt-index.readthedocs.io/en/latest/examples/index_structs/knowledge_graph/KnowledgeGraphDemo.html
The embed is broken, but using networkx to visualize the graph is a good option.
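For the networkx route, a minimal sketch looks like the following. The triplets are invented here; in practice you'd pull them out of the index (newer LlamaIndex versions expose a networkx graph directly, but building one by hand from (subject, relation, object) triplets also works):

```python
import networkx as nx

# Hypothetical triplets extracted from the index, for illustration only.
triplets = [
    ("Alice", "works_at", "Acme"),
    ("Acme", "located_in", "Berlin"),
    ("Alice", "knows", "Bob"),
]

# One directed edge per triplet, with the relation stored as an edge label.
G = nx.DiGraph()
for subj, rel, obj in triplets:
    G.add_edge(subj, obj, label=rel)

print(G.number_of_nodes(), G.number_of_edges())

def draw(graph):
    # Render with matplotlib; call draw(G) in a notebook to see the plot.
    import matplotlib.pyplot as plt
    pos = nx.spring_layout(graph)
    nx.draw(graph, pos, with_labels=True, node_color="lightblue")
    nx.draw_networkx_edge_labels(
        graph, pos, edge_labels=nx.get_edge_attributes(graph, "label"))
    plt.show()
```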