----------------------
To use an index outside the scope in which it was built, for example from another API or process, you would typically serialize (persist) the index and then load it in the new context. The exact steps depend on the APIs you're working with.
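In LlamaIndex specifically, the persisting half of that round trip is a single call on the index's storage context. A minimal sketch, assuming an existing `index` object and using the `<persist_dir>` placeholder:

```python
# Persist the index's storage context (docstore, vector store, index store)
# to a directory so another process or API can load it later.
index.storage_context.persist(persist_dir="<persist_dir>")
```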
On the loading side, LlamaIndex reconstructs the persisted data through a `StorageContext`. Here's an example of how you can load data:
```python
from llama_index import (
    StorageContext,
    load_index_from_storage,
    load_indices_from_storage,
    load_graph_from_storage,
)
from llama_index.storage.docstore import SimpleDocumentStore
from llama_index.storage.index_store import SimpleIndexStore
from llama_index.vector_stores import SimpleVectorStore

# rebuild the storage context from the persisted directory
storage_context = StorageContext.from_defaults(
    docstore=SimpleDocumentStore.from_persist_dir(persist_dir="<persist_dir>"),
    vector_store=SimpleVectorStore.from_persist_dir(persist_dir="<persist_dir>"),
    index_store=SimpleIndexStore.from_persist_dir(persist_dir="<persist_dir>"),
)

# load a single index
# need to specify index_id if multiple indexes are persisted to the same directory
index = load_index_from_storage(storage_context, index_id="<index_id>")

# don't need to specify index_id if there's only one index in the storage context
index = load_index_from_storage(storage_context)

# load multiple indices
indices = load_indices_from_storage(storage_context)  # loads all indices
indices = load_indices_from_storage(
    storage_context, index_ids=[index_id1, ...]
)  # loads specific indices

# load composable graph
graph = load_graph_from_storage(
    storage_context, root_id="<root_id>"
)  # loads graph with the specified root_id
```
(Source: LlamaIndex Documentation)
Please replace `<persist_dir>` and `<index_id>` with your actual persist directory and index ID.
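Once loaded, the index behaves like any other LlamaIndex index in the new context. A minimal usage sketch, assuming an LLM/embedding backend is configured and `index` was loaded as above (the question string is a placeholder):

```python
# Query the reloaded index exactly as you would a freshly built one.
query_engine = index.as_query_engine()
response = query_engine.query("Your question here")
print(response)
```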