How did you make the graph?
graph = ComposableGraph.load_from_disk(index_file_name, service_context=request.service_context)
graph_config = GraphToolConfig(
    graph=graph,
    name="Graph Index",
    description=new_desc,
    tool_kwargs={"return_direct": True},
    query_configs=get_query_configs(),
)
toolkit = LlamaToolkit(
    graph_configs=[graph_config],
)
And how did you make that?
graph = ComposableGraph.load_from_disk(index_file_name, service_context=request.service_context)
index_file_name points to a composite index.
I convert a bunch of vector indexes into a list and make a graph out of that.
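For context, building that kind of graph (several vector indexes composed under a list index) and saving it so it can later be read back with load_from_disk looks roughly like the sketch below. This is a minimal sketch against the pre-0.6 llama-index API the rest of this thread implies; the reader paths, index summaries, and the from_indices call are assumptions rather than the exact code used here, and import paths can differ between versions.

# Minimal sketch (pre-0.6 llama-index API assumed; import paths vary by version)
from llama_index import GPTListIndex, GPTSimpleVectorIndex, SimpleDirectoryReader
from llama_index.indices.composability import ComposableGraph

# One vector index per document set (paths and summaries are hypothetical)
residents_docs = SimpleDirectoryReader("data/new_residents").load_data()
golf_docs = SimpleDirectoryReader("data/golf_course").load_data()
residents_index = GPTSimpleVectorIndex.from_documents(residents_docs)
golf_index = GPTSimpleVectorIndex.from_documents(golf_docs)

# Compose the vector indexes under a list index; the summaries drive routing
graph = ComposableGraph.from_indices(
    GPTListIndex,
    [residents_index, golf_index],
    index_summaries=[
        "Information for new residents of the Town of Herndon, VA.",
        "Information about Herndon Centennial Golf Course, including rates and events.",
    ],
)

# Persist it so it can be reloaded later with ComposableGraph.load_from_disk
graph.save_to_disk("composite_index.json")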
It was working in my other environment... let me take a look.
Let me try it in a notebook... right now the flow goes through multiple layers.
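One way to cut those layers out is to query the graph directly in the notebook, skipping the toolkit and chat agent. A rough sketch, reusing the comp_indx_file_name, service_context, and get_query_configs names from the snippets below (the query_configs keyword on graph.query matches the pre-0.6 API, but treat the details as an assumption):

# Sketch: query the composable graph directly, bypassing LlamaToolkit and the agent
graph = ComposableGraph.load_from_disk(comp_indx_file_name, service_context=service_context)
response = graph.query(
    "What are the golf rates?",
    query_configs=get_query_configs(),  # same configs passed to the tool below
)
print(response)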
I am getting this error, but not always. My setup is a graph built over a list of vector indexes.
graph = ComposableGraph.load_from_disk(comp_indx_file_name, service_context=service_context)
new_desc = 'Query this tool first.'
graph_config = GraphToolConfig(
    graph=graph,
    name="Graph Index",
    description="Use this tool if the question is about golf rates",
    tool_kwargs={"return_direct": True},
    query_configs=get_query_configs(),
)
toolkit = LlamaToolkit(
    graph_configs=[graph_config],
)
memory = None
agent_chain = create_llama_chat_agent(
    toolkit,
    llm_predictor.llm,
    memory=memory,
    verbose=True,
)
resp_str = agent_chain.run(input="What are the golf rates?", chat_history=None)
query_configs = [
    {
        "index_struct_type": "list",
        "query_mode": "default",
        "query_kwargs": {
            "response_mode": "tree_summarize",
            "verbose": True,
        },
    },
    {
        "index_struct_type": "simple_dict",
        "query_mode": "default",
        "query_kwargs": {
            "similarity_top_k": 3,
            # "include_summary": True
        },
        # "query_transform": decompose_transform
    },
]
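(The get_query_configs() call in the tool config above is presumably a thin helper returning this list; the real helper isn't shown in this thread, but it would be something like:)

def get_query_configs():
    # Hypothetical wrapper: returns the query_configs list defined above
    return query_configs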
The snippet of graph is {"index_struct": {"type": "composite", "data": {"all_index_structs": {"dc179591-09a1-49cb-a71a-d34cc4840043": {"type": "simple_dict", "data": {"index_id": "dc179591-09a1-49cb-a71a-d34cc4840043", "summary": "\nThis document provides information for new residents of the Town of Herndon, VA. It includes information on setting up utilities, registering to vote, accessing town services, dog licenses, the Herndon Senior Center, the Herndon Post Office, public schools and libraries, real estate taxes, recreation, youth sports leagues, setting up utilities, vehicle information, voter registration, weekly trash collection and recycling, the Herndon ON the GO mobile app, and the Town of Herndon e-newsletters.", "nodes_dict": {"3ce18aaf-dd5a-4519-a851-c24f7199c6bf": "3ce18aaf-dd5a-4519-a851-c24f7199c6bf"}, "doc_id_dict": {"40762b8c-99b4-49d8-8a93-6bd9b4fe2601": ["3ce18aaf-dd5a-4519-a851-c24f7199c6bf"]}, "embeddings_dict": {}}}, "dcfbd7d0-1009-43f2-b747-65d0b3c49b1c": {"type": "simple_dict", "data": {"index_id": "dcfbd7d0-1009-43f2-b747-65d0b3c49b1c", "summary": "\nThis document provides information about Herndon Centennial Golf Course, located in Herndon, Virginia. It includes details about the 18-hole, par 71 layout, directions to the course, food and beverage options, a golf shop, practice area, scorecard, and upcoming events. It also provides contact information for the golf course.", "nodes_dict": {"ff2684bd-0b68-428e-9c9d-eb747a16767b": "ff2684bd
It looks okay to me; the child index is simple_dict.
I have no idea what's going on there.
My best guess is to (optionally) update to the latest llama-index version, but also to rebuild the graph instead of loading it from disk.
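To try the rebuild idea without the save/load round trip, the graph can be reconstructed in the same process and handed to GraphToolConfig. A sketch only; vector_indexes and index_summaries stand in for however the child indexes were originally built:

# Sketch: rebuild the graph in-process instead of using load_from_disk
graph = ComposableGraph.from_indices(
    GPTListIndex,
    vector_indexes,                    # the child vector indexes, rebuilt in-process
    index_summaries=index_summaries,   # one summary string per child index
    service_context=service_context,
)
# then pass this graph to GraphToolConfig exactly as before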
Thanks for the suggestion. I think that might be the reason: I see there is a minor version mismatch. It seems important to create and query the index with the same version, including the minor version, since a lot of keys in the JSON have to match the keys used in the code.
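For anyone hitting this later, a quick way to confirm which llama-index version each environment is actually running (so the creation and query sides match down to the minor version):

# Print the installed llama-index version in each environment
from importlib.metadata import version
print(version("llama-index"))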
Yeah, exactly! Library updates usually don't break things, but sometimes it happens.
That was it, by the way. I upgraded the index and used the same version for creation and querying, and I don't see the errors anymore.