@kapa.ai I have a persisted vector store and I'm loading it as StorageContext. How can I get/iterate all the documents from that store?
How can I get the embedding value from a Document?
How can I query a VectorStoreIndex using raw embedding vector instead of string?
Please stand by @AvishWagde as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
Based on the provided extracts, here's how you can load a persisted vector store as a StorageContext:

Plain Text
from llama_index.vector_stores import SimpleVectorStore

# Load the persisted vector store
vector_store = SimpleVectorStore.from_persist_path("<path_to_persisted_vector_store>")


However, the extracts do not provide information on how to iterate over all the documents in a vector store. You might need to refer to the LlamaIndex documentation or source code for more details on this.
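That said, `SimpleVectorStore` persists its data as a JSON file, so one pragmatic workaround is to read that file directly. Here is a minimal plain-Python sketch, assuming the persisted JSON contains an `embedding_dict` mapping node ids to embeddings and a `text_id_to_ref_doc_id` mapping; this shape is an assumption about the persisted format, not a documented contract, so verify it against your own file:

```python
import json
import tempfile

# Hypothetical sample in the shape SimpleVectorStore is ASSUMED to persist:
# a JSON object with an "embedding_dict" keyed by node id.
sample = {
    "embedding_dict": {
        "node-1": [0.1, 0.2, 0.3],
        "node-2": [0.4, 0.5, 0.6],
    },
    "text_id_to_ref_doc_id": {"node-1": "doc-A", "node-2": "doc-A"},
}

# Write the sample to a temp file to stand in for your persisted store.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample, f)
    path = f.name

# Load the file and iterate every stored node id and its embedding.
with open(path) as f:
    data = json.load(f)

for node_id, embedding in data["embedding_dict"].items():
    print(node_id, len(embedding))
```

Note that the vector store only holds ids and embeddings; the document text itself normally lives in the docstore, so iterating `storage_context.docstore.docs` may be the more direct route to the full documents.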

As for getting the embedding value from a Document, the extracts do not provide explicit information on this. Typically, in a vector store, each document is associated with an embedding vector, but the method to retrieve it would depend on the specific implementation of the vector store.

To query a VectorStoreIndex using a raw embedding vector, the extracts do not provide a direct example. However, based on the API reference, it seems that the query method of a vector store accepts a VectorStoreQuery object, which can include a query_embedding parameter. This suggests that you might be able to query using a raw embedding vector. Here's a hypothetical example:

Plain Text
from llama_index.vector_stores import VectorStoreQuery

# Assuming `embedding_vector` is your raw embedding vector
query = VectorStoreQuery(query_embedding=embedding_vector, similarity_top_k=10)
query_result = vector_store.query(query)
Please note that this is a hypothetical example and may not work as-is. You should refer to the LlamaIndex API reference for more accurate information.
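For intuition, the `similarity_top_k` retrieval such a query performs boils down to ranking the stored embeddings by cosine similarity against `query_embedding` and keeping the k best. A plain-Python sketch of that idea (the `top_k` helper and the sample store are illustrative names, not LlamaIndex APIs):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_embedding, embedding_dict, k=2):
    # Score every stored embedding against the query, then keep the k best.
    scored = [
        (node_id, cosine_similarity(query_embedding, emb))
        for node_id, emb in embedding_dict.items()
    ]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]

# Tiny illustrative store of node-id -> embedding.
store = {
    "node-1": [1.0, 0.0],
    "node-2": [0.0, 1.0],
    "node-3": [0.9, 0.1],
}
result = top_k([1.0, 0.0], store, k=2)
# result ranks node-1 (identical direction) first, node-3 second.
```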
----------------------
Note: you can ask me a follow-up question by @-mentioning me again :speech_balloon:
----------------------