BAAI/bge-reranker-base (a popular choice these days for local models)

response = query_engine.query(
    "Which grad schools did the author apply for and why?",
)
index = VectorStoreIndex.from_vector_store(vector_store=vector_store, service_context=service_context)
retriever = index.as_retriever(similarity_top_k=3)
nodes = retriever.retrieve(question)
ValueError: shapes (13,768) and (384,) not aligned: 768 (dim 1) != 384 (dim 0)
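That traceback is just a dot product between mismatched vectors: the store holds 768-dimensional embeddings while the query was embedded into 384 dimensions. A minimal numpy sketch (shapes taken from the error, random data) reproduces it:

```python
import numpy as np

stored = np.random.rand(13, 768)  # 13 chunks embedded with a 768-dim model
query = np.random.rand(384)       # query embedded with a different, 384-dim model

try:
    scores = stored @ query  # the similarity matmul the retriever performs
except ValueError as err:
    print(err)  # shapes (13,768) and (384,) not aligned
```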
BAAI/bge-reranker-base
in service context?

service_context = ServiceContext.from_defaults(
    embed_model=OpenAIEmbedding(model=OpenAIEmbeddingModelType.TEXT_EMBED_3_LARGE)
)
index = VectorStoreIndex.from_vector_store(vector_store=vector_store, service_context=service_context)
rerank = SentenceTransformerRerank(model="BAAI/bge-reranker-base", top_n=3)
query_engine = index.as_query_engine(similarity_top_k=10, node_postprocessors=[rerank])
return query_engine
ValueError: shapes (13,768) and (3072,) not aligned: 768 (dim 1) != 3072 (dim 0)
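Switching the query-time embed model to TEXT_EMBED_3_LARGE does not change the 768-dim vectors already sitting in the store, so the mismatch just moves from 384 to 3072. A sketch of a fail-fast check (hypothetical retrieve helper, numpy standing in for the vector store):

```python
import numpy as np

def retrieve(stored: np.ndarray, query_vec: np.ndarray, top_k: int = 3) -> np.ndarray:
    """Hypothetical helper: dot-product retrieval with an explicit dim check."""
    if query_vec.shape[0] != stored.shape[1]:
        raise ValueError(
            f"query dim {query_vec.shape[0]} != index dim {stored.shape[1]}: "
            "re-embed the documents with the model used at query time"
        )
    scores = stored @ query_vec              # similarity score per stored chunk
    return np.argsort(scores)[::-1][:top_k]  # indices of the top_k chunks
```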
nodes = query_engine.query(question)
qdrant with location=":memory:", so I create new embeddings every time I run the app. Regarding TEXT_EMBED_3_LARGE, yes, it did work until I added the rerank model 😄

VectorStoreIndex.from_vector_store? In the tutorial they are using VectorStoreIndex.from_documents
fastembed is used there, but I declared OpenAIEmbeddings in the service context 😄
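The root cause behind both errors is the same: the vectors in the store were produced by one model (the fastembed default) while the service context embeds queries with another. Whatever embedder builds the collection must also embed the queries. A toy, self-contained sketch of that invariant (toy_embed is a made-up stand-in, not fastembed or OpenAI):

```python
import hashlib
import numpy as np

def toy_embed(text: str, dim: int = 8) -> np.ndarray:
    """Toy deterministic embedder (illustration only)."""
    h = hashlib.sha256(text.encode()).digest()
    return np.frombuffer(h, dtype=np.uint8)[:dim].astype(float)

# build the "index" and query it with the SAME embedder
docs = ["grad school essay", "qdrant setup notes", "rerank models"]
index_vecs = np.stack([toy_embed(d) for d in docs])  # shape (3, dim)
q = toy_embed("which grad schools?")                 # shape (dim,) — dims match
best = int(np.argmax(index_vecs @ q))                # no shape error possible
print(docs[best])
```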