From https://docs.llamaindex.ai/en/stable/examples/vector_stores/qdrant_hybrid/, in this snippet:

Python
query_engine = index.as_query_engine(
    similarity_top_k=2, sparse_top_k=12, vector_store_query_mode="hybrid"
)

what kind of hybrid retrieval is being used?
It depends on the vector store, and on whether the vector store you are using supports hybrid retrieval.
In this case, it will run sparse vector generation locally with the "prithvida/Splade_PP_en_v1" model via fastembed, in addition to generating dense vectors with OpenAI.
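
To make that concrete, here is a minimal sketch of setting up a hybrid-enabled Qdrant index in LlamaIndex. The in-memory client, collection name, and ./data directory are placeholders for illustration; enable_hybrid=True is the documented switch, after which fastembed generates the sparse vectors locally.

Python
import qdrant_client
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, StorageContext
from llama_index.vector_stores.qdrant import QdrantVectorStore

# Placeholder client and collection name, for illustration only
client = qdrant_client.QdrantClient(location=":memory:")
vector_store = QdrantVectorStore(
    collection_name="demo_hybrid",
    client=client,
    # enable_hybrid=True stores dense and sparse vectors side by side;
    # sparse vectors are generated locally via fastembed (SPLADE by default)
    enable_hybrid=True,
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# The query engine from the question then fuses both result sets:
query_engine = index.as_query_engine(
    similarity_top_k=2, sparse_top_k=12, vector_store_query_mode="hybrid"
)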
One more question: will I also need a GPU for local sparse vector generation?
You don't need a GPU, but it will run faster if you have one.
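
For reference, a small sketch of running the sparse model on CPU with fastembed directly (the model name is taken from this thread; fastembed runs on CPU via onnxruntime by default, so no GPU is required):

Python
from fastembed import SparseTextEmbedding

# Downloads the model on first use and runs on CPU by default
model = SparseTextEmbedding(model_name="prithvida/Splade_PP_en_v1")
embeddings = list(model.embed(["hybrid retrieval combines dense and sparse vectors"]))

# Each result is a sparse vector: token indices with their weights
sparse = embeddings[0]
print(sparse.indices[:5], sparse.values[:5])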