I have already created embeddings with OpenAI "text-embedding-3-large" and stored them in Supabase (pgvector). I now have a table named "embeddings" with a column "embedding" holding the 1024-dimensional vector embeddings and a column "docs" holding the text that belongs to each embedding.

I have created a LlamaIndex SupabaseVectorStore, but the query returns an empty response. Why is that?

```python
from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.supabase import SupabaseVectorStore

vector_store = SupabaseVectorStore(
    postgres_connection_string=("URI string"),
    collection_name="embeddings",
    dimension=1024,
)
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)
query_engine = index.as_query_engine(similarity_top_k=3)
response = query_engine.query("Can you show me some funds?")
```

The response is empty.
4 comments
Did you create the vector store collection with llama-index, or outside of llama-index?
I have created vector store collection outside of llama-index.
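That is likely the problem. If I understand the SupabaseVectorStore integration correctly, it is built on the `vecs` client, which manages its own collections (in a separate `vecs` schema, with its own id/vec/metadata layout), so pointing it at a hand-created "embeddings" table matches nothing and returns empty results. One workaround is to read the existing rows yourself and wrap them in nodes, as done later in this thread. The decoding step can be sketched with the standard library alone (the sample rows below are made up; the "docs"/"embedding" column names and the JSON-encoded vectors follow the thread):

```python
import json

# Hypothetical rows as they might come back from the custom "embeddings"
# table: each vector is stored as a JSON-encoded string, as in the thread.
rows = [
    {"docs": "Fund A invests in equities.", "embedding": json.dumps([0.1] * 4)},
    {"docs": "Fund B invests in bonds.", "embedding": json.dumps([0.2] * 4)},
]

# Decode each row into a (text, vector) pair; these are what you would
# wrap in llama_index TextNode(text=..., embedding=...) objects.
pairs = [(row["docs"], json.loads(row["embedding"])) for row in rows]

print(len(pairs))        # 2
print(len(pairs[0][1]))  # 4
```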
While going through a GitHub issue, I found out about TextNode.

Here I have created a node and an index for just one embedding:

```python
import json

from llama_index.core import VectorStoreIndex
from llama_index.core.schema import TextNode

text_embedding = json.loads(data["embedding"][0])
text = data["text"][0]
node = TextNode(text=text, metadata={}, embedding=text_embedding)
index = VectorStoreIndex(nodes=[node])
query_engine = index.as_query_engine(similarity_top_k=3)
response = query_engine.query("Can you show me some funds?")
```
And now I am getting an error: "ValueError: shapes (1536,) and (1024,) not aligned: 1536 (dim 0) != 1024 (dim 0)". It seems the query embedding has 1536 dimensions. I am still going through the documentation but couldn't find a way to pass an embedding model for the query.
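The mismatch is between the query side and the stored vectors: when no embedding model is configured, LlamaIndex falls back to OpenAI's default model, which produces 1536-dimensional vectors, while the table holds 1024-dimensional ones. The similarity computation then fails exactly as the error says, which can be reproduced with plain NumPy:

```python
import numpy as np

query_vec = np.zeros(1536)   # dimension produced by the default embed model
stored_vec = np.zeros(1024)  # dimension of the vectors stored in the table

try:
    np.dot(query_vec, stored_vec)  # the dot product in the similarity check
    message = "no error"
except ValueError as exc:
    message = str(exc)

print(message)
```

Configuring the same model everywhere avoids this; besides passing `embed_model` as shown below, llama_index also exposes a global `Settings.embed_model` for the same purpose.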
Seems like I got it working:

```python
import json

from llama_index.core import VectorStoreIndex
from llama_index.core.schema import TextNode
from llama_index.embeddings.openai import OpenAIEmbedding

embed_model = OpenAIEmbedding(model="text-embedding-3-large", dimensions=1024)
text_embedding = json.loads(data["embedding"][0])
text = data["text"][0]
node = TextNode(text=text, metadata={}, embedding=text_embedding)
index = VectorStoreIndex(nodes=[node])
query_engine = index.as_query_engine(similarity_top_k=3, embed_model=embed_model)
response = query_engine.query("Can you show me some funds?")
```

One follow-up question: in our case we add new embeddings on a daily basis, so will we have to create an index using TextNode every time there is a new embedding in the database?
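You shouldn't need a full rebuild: VectorStoreIndex has an insert_nodes() method (and vector stores an add()), so each new row can be wrapped in a TextNode and inserted into the existing index. Conceptually, an incremental vector index is just an append plus a top-k similarity scan, as in this dependency-free sketch (the class name and sample data are made up for illustration):

```python
import math


class TinyVectorIndex:
    """Dependency-free sketch of an incrementally updatable vector index."""

    def __init__(self):
        self.texts = []
        self.vectors = []

    def add(self, text, vector):
        # Incremental update: just append; no rebuild needed.
        self.texts.append(text)
        self.vectors.append(vector)

    def query(self, vector, top_k=3):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norm_a = math.sqrt(sum(x * x for x in a))
            norm_b = math.sqrt(sum(x * x for x in b))
            return dot / (norm_a * norm_b)

        # Score every stored vector and return the top_k most similar texts.
        scored = sorted(
            ((cosine(vector, v), t) for t, v in zip(self.texts, self.vectors)),
            reverse=True,
        )
        return [t for _, t in scored[:top_k]]


index = TinyVectorIndex()
index.add("equity fund", [1.0, 0.0])
index.add("bond fund", [0.0, 1.0])
index.add("mixed fund", [0.7, 0.7])
print(index.query([1.0, 0.0], top_k=2))  # ['equity fund', 'mixed fund']
```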