I'm using an embedding model with a dimension of 1024 via BedrockEmbedding, and I've set embed_model to this model both in Settings and in the VectorStoreIndex.from_vector_store() call, yet for some reason it is still expecting the OpenAI embedding model. Am I missing something, or is there any advice on how to debug this?
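For reference, this is roughly how I'm wiring it up (the Qdrant URL and collection name are placeholders, and depending on your llama-index-embeddings-bedrock version the parameter may be model rather than model_name):

```python
import qdrant_client
from llama_index.core import Settings, VectorStoreIndex
from llama_index.embeddings.bedrock import BedrockEmbedding
from llama_index.vector_stores.qdrant import QdrantVectorStore

# Cohere Embed v3 on Bedrock produces 1024-dim vectors
embed_model = BedrockEmbedding(model_name="cohere.embed-english-v3")

# set it globally so nothing falls back to the OpenAI default
Settings.embed_model = embed_model

client = qdrant_client.QdrantClient(url="http://localhost:6333")  # placeholder URL
vector_store = QdrantVectorStore(client=client, collection_name="my_collection")  # placeholder name

# passing embed_model here as well, just to be safe
index = VectorStoreIndex.from_vector_store(vector_store, embed_model=embed_model)
```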
It seems like st.session_state.embed_model does not match the embed model that was used to create this index
The full traceback would probably show an error coming from the Qdrant vector store, because you are querying with a different embedding model than the one that was used to create the index
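One quick sanity check is to compare the collection's configured vector size against what your model actually emits, something like this (client and embed_model as in your snippet above; note that on some Qdrant setups config.params.vectors can be a dict of named vector configs rather than a single object):

```python
# what the Qdrant collection was created with
info = client.get_collection("my_collection")  # placeholder collection name
collection_dim = info.config.params.vectors.size

# what the embedding model actually produces
model_dim = len(embed_model.get_text_embedding("dimension check"))

print(f"collection expects {collection_dim}, model produces {model_dim}")
# collection_dim == 1536 would mean the collection was built with OpenAI-size embeddings
```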
Is there an embedding dimension parameter set somewhere to a default value of 1536 that is not being overridden in my case because I forgot to set something?
So strange, because I really do only have access to Cohere's v3 model, which has a dimension of 1024. I know the defaults in LlamaIndex are usually OpenAI, so I just assumed it was dropping to the default somewhere and assuming a dimension of 1536 from one of OpenAI's embedding models. Will go ahead and re-embed everything and see what happens.
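For anyone who hits this later, this is roughly the rebuild I'm planning (collection name and data path are placeholders; as far as I can tell QdrantVectorStore creates the collection on first insert and infers the vector size from the embeddings):

```python
from llama_index.core import SimpleDirectoryReader, StorageContext

# drop the old 1536-dim collection so it gets recreated at 1024
client.delete_collection("my_collection")  # placeholder name

vector_store = QdrantVectorStore(client=client, collection_name="my_collection")
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("./data").load_data()  # placeholder data path
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
    embed_model=embed_model,  # the 1024-dim Bedrock/Cohere model from above
)
```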