Loading from Pinecone

hey!

I loaded documents following this doc (https://gpt-index.readthedocs.io/en/latest/examples/vector_stores/PineconeIndexDemo.html) and now I have a question: how do I load documents from Pinecone instead of creating the index every time?

I already have all my indexes in my Pinecone account, so how can I query them with LlamaIndex and get quick access to them?
7 comments
To connect back to the existing index, set up the vector store/storage context to point to the existing Pinecone data, and then load the index like this:

index = GPTVectorStoreIndex([], storage_context=storage_context)
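For context, a minimal sketch of what that setup might look like, assuming the legacy `llama_index` (gpt-index) API from the linked docs and the v2 `pinecone-client` package; the index name, API key, and environment below are placeholders, not values from the thread:

```python
# Sketch only: legacy llama_index API; credentials and index name are placeholders
import pinecone
from llama_index import GPTVectorStoreIndex, StorageContext
from llama_index.vector_stores import PineconeVectorStore

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
pinecone_index = pinecone.Index("your-existing-index")  # already populated

# Point the storage context at the existing Pinecone data
vector_store = PineconeVectorStore(pinecone_index=pinecone_index)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Pass an empty document list: nothing gets re-embedded or re-uploaded
index = GPTVectorStoreIndex([], storage_context=storage_context)
query_engine = index.as_query_engine()
```

The empty list is the key detail: because the storage context already points at populated Pinecone data, the index object attaches to the existing vectors instead of building new ones.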
Hey @Logan M! This doesn't work in the case of the Milvus vector store; it says "list index out of range."

Am I doing something wrong?
Nvm, just saw another thread where you mentioned that you fixed this issue and that the fix will be in the next release
@Logan M Is it possible to return sources (I use Pinecone) along with the answer?

index.as_query_engine(similarity_top_k=3) Which param should I use to do that?
Yea, you should be able to check response.source_nodes on the response object
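A hedged sketch of what reading those sources might look like, assuming an `index` already connected to Pinecone as above (the query text is a placeholder; in this version of the library, `source_nodes` is a list of `NodeWithScore` objects):

```python
# Sketch only: assumes `index` is already attached to the Pinecone data
query_engine = index.as_query_engine(similarity_top_k=3)
response = query_engine.query("What is this document about?")  # placeholder query

print(response)  # the synthesized answer
for source in response.source_nodes:     # the retrieved chunks behind the answer
    print(source.score)                  # similarity score
    print(source.node.get_text()[:200])  # start of the source chunk's text
```

No extra parameter is needed on as_query_engine for this; similarity_top_k only controls how many source nodes are retrieved, and they ride along on the response object either way.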