
Debugging vector inserting error with BedrockEmbedding

Hey all, currently trying to debug the following error:

{"error":"Wrong input: Vector inserting error: expected dim: 1536, got 1024"}

I am using an embedding model with a dimension of 1024 via BedrockEmbedding, and have set the embed_model in both Settings and the VectorStoreIndex.from_vector_store() method to this embedding model, yet for some reason it is still expecting the OpenAI embedding dimension. Am I missing something, or any advice on how to debug?
Yeah this is an existing vector store, here are a few snippets, what else would be helpful?


st.session_state.embed_model = BedrockEmbedding(client=bedrock_runtime, model_name=model)

Settings.embed_model = st.session_state.embed_model

vector_store = QdrantVectorStore(collection_name="name", client=st.session_state.client, enable_hybrid=True)

index = VectorStoreIndex.from_vector_store(vector_store=vector_store, embed_model=st.session_state.embed_model)

retriever = st.session_state.index.as_retriever(similarity_top_k=100, filters=filters_db)
It seems like st.session_state.embed_model does not match the embed model that was used to create this index πŸ‘€

The full traceback would probably show some error in the qdrant vector store, because you are querying with a different embedding model than was used to create the index
yeah the thing is it has to be the same embed model because I only have access to one due to security reasons..
is there an embedding dimension parameter set somewhere to a default value of 1536 that is not being overridden in my case because I forgot to set something?
Hmm, nope, qdrant doesn't have the dim as a param
I really think your DB somehow has 1536 dim vectors in it
How else would it be expecting that dim?
Try creating a fresh db and see what happens imo
So strange, because I really do only have access to Cohere's v3 model, which has a dimension of 1024. I know the defaults in LlamaIndex are usually OpenAI, so I just assumed it was falling back to the default somewhere and expecting a dimension of 1536 from one of OpenAI's embedding models. Will go ahead and re-embed everything and see what happens.
If you don't have an openai key or access to openai, then you'd be getting an API key error before ever getting this error