Plain Text
index = KnowledgeGraphIndex.from_documents(
    documents,
    max_triplets_per_chunk=30,
    service_context=service_context,
    include_embeddings=True,
)

When I set include_embeddings=True, where is this data stored, and how can I load these embeddings?
It's stored in memory.

You can save/load the index to get all the data:

Plain Text
index.storage_context.persist(persist_dir="./storage")

from llama_index import StorageContext, load_index_from_storage
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context, service_context=service_context)
Are these embeddings related to the nodes?
The embeddings are for the extracted triplets.
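To make the persist/load round trip above concrete: after persisting, the triplet embeddings end up as JSON files inside the storage directory, keyed by the extracted triplets. The sketch below is a minimal, stdlib-only illustration of that shape (a triplet-text → vector map written to and read back from disk); the filename `triplet_embeddings.json` and the exact dict layout are assumptions for illustration, not llama_index's real file format, which `storage_context.persist` manages for you.

```python
import json
import tempfile
from pathlib import Path

# Illustrative sketch only: models the general shape of persisted triplet
# embeddings (triplet text -> vector), not llama_index's actual file layout.
triplet_embeddings = {
    "(Alice, works_at, Acme)": [0.12, -0.55, 0.33],
    "(Acme, located_in, Berlin)": [0.08, 0.91, -0.27],
}

persist_dir = Path(tempfile.mkdtemp()) / "storage"
persist_dir.mkdir(parents=True)

# Persist: write the embedding map to disk, analogous in spirit to
# index.storage_context.persist(persist_dir="./storage").
with open(persist_dir / "triplet_embeddings.json", "w") as f:
    json.dump(triplet_embeddings, f)

# Load: read the map back, analogous in spirit to load_index_from_storage(...).
with open(persist_dir / "triplet_embeddings.json") as f:
    loaded = json.load(f)

# The round trip preserves every triplet's vector.
assert loaded == triplet_embeddings
```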