Loading a graph

How does one save a graph? It looks like

graph.storage_context.persist(persist_dir="./tree/")

does not work; it throws:

'ComposableGraph' object has no attribute 'storage_context'
Uhhh I don't see a storage context attribute in the code for a graph

@jerryjliu0 maybe you know how to load a graph with the new API?
@EnderEnder how are you creating the composable graph? If you take a look at this example: https://github.com/jerryjliu/llama_index/blob/main/docs/examples/composable_indices/ComposableIndices-Prior.ipynb - you can see that the storage_context can be passed during the from_indices call. The storage_context isn't actually stored on the graph itself.
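A minimal sketch of that pattern (assuming `storage_context`, `index1`, `index2`, and the summaries are already defined, as in the full example later in the thread):

Plain Text
from llama_index import StorageContext, GPTListIndex
from llama_index.indices.composability import ComposableGraph

# storage_context is passed at graph-construction time; it is not
# stored as an attribute on the ComposableGraph itself
# (index1 / index2 are assumed to have been built with this same storage_context)
graph = ComposableGraph.from_indices(
    GPTListIndex,
    [index1, index2],
    index_summaries=[index1_summary, index2_summary],
    storage_context=storage_context,
)

# persist via the storage context object, not via graph.storage_context
storage_context.persist(persist_dir="./tree/")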
@jerryjliu0 @Logan M

You are both absolutely correct, I had figured it out prior and adjusted it to the following:

storage_context.persist(persist_dir="./tree/")

Although I am not sure why index.storage_context.persist() works, while graph.storage_context.persist() is not set up the same way and fails?
Oh yeah, that's because a "graph" is just a very light concept on top of existing indices. The storage context is passed to each index, so each index has access to the storage context
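Put another way, every index holds a reference to the storage context it was built with, so persisting through any of them writes the same stores (a small sketch, assuming `index1` and `index2` were constructed with the shared `storage_context`):

Plain Text
# each index keeps a reference to the storage context it was built with,
# so these all point at the same docstore / index store / vector store
assert index1.storage_context is storage_context
assert index2.storage_context is storage_context

# either call persists the same data
storage_context.persist(persist_dir="./tree/")
index1.storage_context.persist(persist_dir="./tree/")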
@EnderEnder have you reloaded the graph? I reloaded the graph and it doesn't work
@jerryjliu0 @Logan M I get KeyError: 'e7277d9b-fa05-4515-929a-dbbed221efe6' when I run the query:

Plain Text
new_storage_context = StorageContext.from_defaults(persist_dir='./storage/test')
graph2 = load_graph_from_storage(new_storage_context, root_id='c5873eda-f8ce-45dd-9b6f-b1c4e5474d9a', service_context=service_context)
query_engine2 = graph2.as_query_engine()
response2 = query_engine2.query(
    "What is the climate of New York City like? How cold is it during the winter?",
)
@sunny what version of llama index do you have? This should be fixed in newer versions

Try upgrading and saving/loading again
0.6.10. I see the code of load_graph_from_storage is the same as before.
The fix is not in load_graph_from_storage.

Using the latest version of llama index, you should be able to save/load a graph without issues.

The full guide is here. An important note is that each index in the graph should be initialized with the same storage context:

https://gpt-index.readthedocs.io/en/latest/how_to/index_structs/composability.html
Hi, I tried this code, and I still have the error.
As stated in the docs I shared, one thing you missed is that each index should share the same storage context in order to persist (I know, a little annoying)

Plain Text
from llama_index import (
    GPTVectorStoreIndex,
    GPTListIndex,
    StorageContext,
    load_graph_from_storage,
)
from llama_index.indices.composability import ComposableGraph

# every index must be built with the same storage context
storage_context = StorageContext.from_defaults()

# `documents`, `index1_summary`, and `index2_summary` are assumed to be defined
index1 = GPTVectorStoreIndex.from_documents(documents, storage_context=storage_context)
index2 = GPTVectorStoreIndex.from_documents(documents, storage_context=storage_context)

# build graph
graph = ComposableGraph.from_indices(
    GPTListIndex,
    [index1, index2],
    index_summaries=[index1_summary, index2_summary],
    storage_context=storage_context,
)

# set id
graph.root_index.set_index_id("my_id")

# save
graph.root_index.storage_context.persist(persist_dir="./storage")

# load
storage_context = StorageContext.from_defaults(persist_dir="./storage")
graph = load_graph_from_storage(storage_context, root_id="my_id")
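And once it's reloaded, querying works the same way as in your snippet above (a small sketch, assuming an LLM / service context is configured):

Plain Text
query_engine = graph.as_query_engine()
response = query_engine.query(
    "What is the climate of New York City like? How cold is it during the winter?"
)
print(response)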