
And one more question. I'm following the FAISS vector store documentation and managed to index documents.

However, during retrieval I run into an issue:

service_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model=base_embeddings,
    chunk_size=512,
)
vector_store = FaissVectorStore.from_persist_dir('./storage')
storage_context = StorageContext.from_defaults(
    vector_store=vector_store,
    persist_dir='./storage',
)
index = load_index_from_storage(
    storage_context=storage_context,
    service_context=service_context,
)
query_engine = index.as_query_engine()
response = query_engine.query(query)

And I get the following error:

    self._index.index_struct.nodes_dict[idx] for idx in query_result.ids
KeyError: '1'

Any idea how to fix that?
14 comments
How did you save the index? I just ran the example notebook and it worked fine
https://gpt-index.readthedocs.io/en/stable/examples/vector_stores/FaissIndexDemo.html
vector_store = FaissVectorStore.from_persist_dir(FAISS_INDEX_PATH)
storage_context = StorageContext.from_defaults(
    vector_store=vector_store,
    persist_dir=FAISS_INDEX_PATH,
)

# generate embeddings for each node
for node in nodes:
    node_embedding = base_embeddings.get_text_embedding(
        node.get_content(metadata_mode="all")
    )
    node.embedding = node_embedding
    print(node_embedding)

# index the message in the vector db
vector_store.add(nodes)
index.storage_context.persist()
I'm mixing both the RAG from scratch and FAISS notebooks
Should probably use index.insert_nodes(nodes) I think?
That updates the docstore, the index store, and the vector store
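A minimal sketch of that indexing path, going through the index rather than writing to the vector store directly (untested; assumes the llama-index 0.8.x API and that `nodes`, `storage_context`, and `service_context` are already set up as in the script above):

```python
from llama_index import VectorStoreIndex

# Build an (initially empty) index on top of the FAISS-backed storage context,
# then insert through the index so the docstore and index store stay in sync
# with the vector store. Calling vector_store.add(nodes) alone only writes
# the embeddings, which is one way to end up with missing nodes_dict entries.
index = VectorStoreIndex(
    nodes=[],
    storage_context=storage_context,
    service_context=service_context,
)
index.insert_nodes(nodes)  # updates docstore, index store, and vector store
index.storage_context.persist()
```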
hmmm I've just copy-pasted the notebook into a local script, set up a new venv, and it fails with the same issue.
Here's my full script, I'm using my own embeddings and Azure OpenAI as llm, but the error seems to come from the index:
Name: faiss-cpu
Version: 1.7.4
Name: llama-index
Version: 0.8.28
Python 3.8.10
hmm not really sure. The only mistake I notice is that you should pass the service context back in when loading

index = load_index_from_storage(storage_context, service_context=service_context)
nope, same stuff....
Did you run the notebook locally? Or can you try running the script above on your machine, to see whether it's an environment issue on my side?
here is the full stacktrace:
I found the issue. You need at least 10 nodes in your index store. Since I was only loading one document containing a single sentence, it created only one node and the query failed.
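For what it's worth, the KeyError itself can be reproduced with plain Python: the retrieval step looks up each id returned by the vector store in the index's `nodes_dict`, and with a single node there is no entry for id `'1'`. The names below are illustrative stand-ins, not the actual llama-index internals:

```python
# Hypothetical stand-ins for the objects in the traceback:
# nodes_dict maps node ids to nodes; the id list comes from the FAISS top-k search.
nodes_dict = {"0": "the only node in the index"}   # one sentence -> one node
query_result_ids = ["0", "1"]                      # top-k asked for more hits than exist

try:
    hits = [nodes_dict[idx] for idx in query_result_ids]
except KeyError as err:
    print(f"KeyError: {err}")  # prints: KeyError: '1' -- same failure as the traceback
```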