I want to load the sentence + metadata index from a persisted directory (where I have saved the relevant docstore, index_store, and vector_store JSON files). Does anyone have a code snippet showing how to do so? Because when I call:
```python
query_engine = sentence_index.as_query_engine(
    similarity_top_k=2,
    # the target key defaults to "window" to match the node_parser's default
    node_postprocessors=[
        MetadataReplacementPostProcessor(target_metadata_key="window")
    ],
)
window_response = query_engine.query("Who is xxx?")
print(window_response)
```
I later get:
AttributeError: 'list' object has no attribute 'as_query_engine'
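For context, here is a minimal loading sketch (assuming the classic `llama_index` persistence API; `./storage` is a placeholder for the persist directory). The error above is consistent with having a *list* of indices rather than a single index object:

```python
from llama_index import StorageContext, load_index_from_storage

# Rebuild the storage context from the persisted
# docstore/index_store/vector_store JSON files.
storage_context = StorageContext.from_defaults(persist_dir="./storage")

# load_index_from_storage returns a single index object.
# Note: load_indices_from_storage returns a LIST of indices, and a list
# has no .as_query_engine — which would produce exactly the
# AttributeError shown above.
sentence_index = load_index_from_storage(storage_context)

query_engine = sentence_index.as_query_engine(similarity_top_k=2)
```

If multiple indices were persisted to the same directory, `load_index_from_storage` may need an explicit `index_id=...` to pick one.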
Hi, I am building a RAG app on top of LlamaIndex, and I want to store the data from my knowledge graph and summary index (the JSON files created when you persist the storage_context) remotely in a cloud DB (something like MongoDB). I can't find any good example of how to do so (I created the knowledge graph by running code in parallel on AWS), and I would now just like to save the indexes/JSONs to a remote DB. If any of you have faced similar issues, help would be much appreciated!!
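One approach is to back the storage context with MongoDB-based stores instead of the local JSON files. A minimal sketch, assuming the `MongoDocumentStore`/`MongoIndexStore` integrations in `llama_index` (the URI and database name below are placeholders):

```python
from llama_index import StorageContext
from llama_index.storage.docstore import MongoDocumentStore
from llama_index.storage.index_store import MongoIndexStore

# Placeholder connection details — substitute your own cluster URI.
MONGO_URI = "mongodb://localhost:27017"
DB_NAME = "my_rag_db"

# Docstore and index store now live in MongoDB instead of local JSON.
storage_context = StorageContext.from_defaults(
    docstore=MongoDocumentStore.from_uri(uri=MONGO_URI, db_name=DB_NAME),
    index_store=MongoIndexStore.from_uri(uri=MONGO_URI, db_name=DB_NAME),
)

# Build (or load) your knowledge graph / summary index with this
# storage_context; writes go straight to the remote DB, so there is
# no separate persist-to-disk step.
```

Since the stores write remotely as the index is built, this could also help with the parallel-AWS setup: each worker pointed at the same Mongo database writes into one shared store, rather than producing per-machine JSON files that need merging afterwards.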
Hi All, what's the current consensus on the best way to ingest finance data (complex PDFs with tables, and Excel files) in LlamaIndex? Currently I get the best results with the UnstructuredReader. Does anyone have a different experience? Thanks
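For reference, a minimal UnstructuredReader sketch using the classic `download_loader` pattern (the file path is a placeholder; `unstructured` and its PDF dependencies must be installed separately):

```python
from pathlib import Path
from llama_index import VectorStoreIndex, download_loader

# Fetch the UnstructuredReader loader from LlamaHub.
UnstructuredReader = download_loader("UnstructuredReader")

loader = UnstructuredReader()
# unstructured partitions the PDF into elements (titles, narrative
# text, tables, ...), which is why it tends to do better on
# table-heavy finance documents than plain text extraction.
documents = loader.load_data(file=Path("./quarterly_report.pdf"))

index = VectorStoreIndex.from_documents(documents)
```

For the Excel side, converting sheets to per-row or per-table text (e.g. via pandas) before indexing is a common alternative to a generic reader, since it preserves column headers with each chunk.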