To load and persist data of a `KnowledgeGraphIndex` in LlamaIndex, you can follow these steps:
- Load Data: You can load data from a source like Wikipedia using the `download_loader` function. Here's an example of how to load data for "Guardians of the Galaxy Vol. 3" from Wikipedia:

```python
from llama_index import download_loader

WikipediaReader = download_loader("WikipediaReader")
loader = WikipediaReader()
documents = loader.load_data(
    pages=["Guardians of the Galaxy Vol. 3"], auto_suggest=False
)
```
(Source: GitHub)
- Build the KnowledgeGraphIndex: After loading the data, you can create a `KnowledgeGraphIndex` from the documents. Note that the original snippet referenced a `graph_store` without defining it; a `SimpleGraphStore` is used below to make the example self-contained. Here's an example:

```python
from llama_index import ServiceContext, KnowledgeGraphIndex
from llama_index.storage.storage_context import StorageContext
from llama_index.graph_stores import SimpleGraphStore
from llama_index.llms import OpenAI

# define LLM
llm = OpenAI(temperature=0, model="gpt-3.5-turbo")
service_context = ServiceContext.from_defaults(llm=llm, chunk_size=512)

# define a storage context backed by an in-memory graph store
graph_store = SimpleGraphStore()
storage_context = StorageContext.from_defaults(graph_store=graph_store)

index = KnowledgeGraphIndex.from_documents(
    documents,
    max_triplets_per_chunk=2,
    storage_context=storage_context,
    service_context=service_context,
)
```
(Source: GitHub)
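Not part of the cited sources, but as a quick sanity check after building: the index can be queried through `as_query_engine`. The sketch below assumes the `index` object from the step above and the legacy `llama_index` query API; the question string is purely illustrative.

```python
# Hypothetical usage sketch (assumes `index` from the step above and an
# OpenAI API key in the environment); not from the cited sources.
query_engine = index.as_query_engine(
    include_text=False,  # answer from extracted triplets only, not source text
    response_mode="tree_summarize",
)
response = query_engine.query("Tell me about Guardians of the Galaxy Vol. 3.")
print(response)
```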
- Persist Data: Once you have your data indexed, you can persist it to disk using the `.persist()` method. This will write all the data to disk at the location specified. Here's an example:

```python
index.storage_context.persist(persist_dir="<persist_dir>")
```
(Source: Documentation)
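The sources above stop at persisting; for completeness, a persisted index can later be reloaded from the same directory with `load_index_from_storage`. This is a sketch assuming the same legacy `llama_index` import paths and the `"<persist_dir>"` placeholder used above.

```python
# Hypothetical reload sketch (assumes the persist step above already ran
# against "<persist_dir>"); not from the cited sources.
from llama_index import load_index_from_storage
from llama_index.storage.storage_context import StorageContext

# point the storage context at the persisted directory, then rebuild the index
storage_context = StorageContext.from_defaults(persist_dir="<persist_dir>")
index = load_index_from_storage(storage_context)
```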