Find answers from the community

Updated 4 months ago

At a glance

The community member created a KnowledgeGraphIndex with the include_embeddings=True option, and is asking where the embeddings are stored and how to load them. Another community member responded that the embeddings are stored in memory, and provided code to save and load the index, which would include the embeddings. A third community member asked if the embeddings are relative to the nodes, and another community member clarified that the embeddings are for the extracted triplets.

index = KnowledgeGraphIndex.from_documents(
    documents,
    max_triplets_per_chunk=30,
    service_context=service_context,
    include_embeddings=True,
)

When I set include_embeddings=True, where is this data stored, and how can I load these embeddings?
3 comments
It's stored in memory.

You can save/load the index to get all the data:

Plain Text
# Persist the index (including the embeddings) to disk
index.storage_context.persist(persist_dir="./storage")

# Later: rebuild the storage context and reload the index
from llama_index import StorageContext, load_index_from_storage

storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context, service_context=service_context)
Are these embeddings relative to the nodes?
The embeddings are for the extracted triplets.
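To actually use those triplet embeddings at query time, the query engine can be run with embedding-based retrieval enabled. A minimal sketch following the usual llama_index knowledge-graph pattern; the parameter values (similarity_top_k, include_text, response_mode) are illustrative choices, not the only valid ones:

```python
# Sketch, assuming `index` was built with include_embeddings=True.
# embedding_mode="hybrid" retrieves triplets by both keyword match and
# embedding similarity; similarity_top_k caps how many are pulled in.
query_engine = index.as_query_engine(
    include_text=False,
    response_mode="tree_summarize",
    embedding_mode="hybrid",
    similarity_top_k=5,
)
response = query_engine.query("What relationships does the document describe?")
print(response)
```

This also works on an index reloaded via load_index_from_storage, since the persisted data includes the triplet embeddings.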