Hey all, I'm trying to use a non-default embedding (Hugging Face) with a Pinecone index, but I keep getting an error that it doesn't accept an embed param? Here is my code for reference:
gpt_index = pinecone.Index("testing")
embed_model = LangchainEmbedding(HuggingFaceEmbeddings())
index = GPTPineconeIndex(pinecone_index=gpt_index, embedding_model=embed_model)
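That error usually just means the constructor declares the keyword under a different name than the one you passed (I believe LlamaIndex expects embed_model rather than embedding_model, but check your version). In plain Python, an unexpected keyword raises a TypeError; a minimal sketch with a made-up stand-in class:

```python
class FakeIndex:
    """Hypothetical stand-in for GPTPineconeIndex; declares embed_model."""
    def __init__(self, documents, *, pinecone_index=None, embed_model=None):
        self.documents = documents
        self.pinecone_index = pinecone_index
        self.embed_model = embed_model

# Passing a keyword the constructor doesn't declare raises TypeError,
# which is the kind of error described above.
try:
    FakeIndex([], pinecone_index="testing", embedding_model="hf-embeddings")
except TypeError as exc:
    print("rejected:", exc)

# With the keyword the constructor actually declares, it works.
ok = FakeIndex([], pinecone_index="testing", embed_model="hf-embeddings")
print("accepted:", ok.embed_model)
```

So renaming the kwarg to whatever your installed version declares should clear it.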
Also, is there a way to load documents from Pinecone and use it as a vector store at the same time? The main use case I'm after is loading all my documents straight from Pinecone, but I'm stuck on generating the query vector and the id_to_text_map, and creating an index requires documents.
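For what it's worth, the id_to_text_map is just a plain dict from vector IDs to their original text, and the query vector is whatever your embedding model produces for the query string. A minimal sketch of the two pieces (the embed function and the IDs here are made up for illustration; with Hugging Face you'd call something like embed_model.embed_query instead):

```python
def embed(text):
    # Hypothetical stand-in for a real embedding call such as
    # HuggingFaceEmbeddings().embed_query(text); returns a fixed-size vector.
    return [float(len(text)), 0.0, 0.0]

# Pinecone only stores vectors (plus any metadata you attached),
# so you have to keep or reconstruct the id -> text mapping yourself.
id_to_text_map = {
    "doc-1": "Life is what happens while you're busy making other plans.",
    "doc-2": "Notes on vector stores.",
}

query_vector = embed("what is life?")
print(len(query_vector), sorted(id_to_text_map))
```

Those two values are exactly what a reader-style API needs alongside the index name, so if the original text wasn't stored as metadata you'll need to fetch it from wherever it lives.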
Is something like this enough:
index = GPTPineconeIndex.from_documents([], pinecone_index=gpt_index)
result = index.query("what is life?")
gpt_index = pinecone.Index("gptindextwo")
embed_model = LangchainEmbedding(HuggingFaceEmbeddings())
index = GPTPineconeIndex([], pinecone_index=gpt_index)
result = index.query("what is life?")
I'm thinking maybe the documents have to be originally inserted with LlamaIndex? 🤔 Then once they are there, you can load the index with that empty array.
Not totally sure though, I haven't used Pinecone yet lol, just trying to understand the code 🙂