
Updated 3 months ago


Hello, I was using Postgres with pgvector for hybrid search, combined with QueryFusionRetriever for even better results.
Now I'm trying to transition to pgvecto.rs for faster performance, but even though their documentation states there are dense and sparse search options, I can't seem to find those options in the llama-index implementation of the vector store.
Is there a way to use those options with llama-index, especially with QueryFusionRetriever if possible?
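For context on the fusion side of the question: one of QueryFusionRetriever's fusion modes (`reciprocal_rerank`) merges the ranked lists from multiple retrievers with reciprocal rank fusion. A minimal, library-free sketch of that fusion step (the doc IDs and the conventional `k=60` smoothing constant are illustrative, not taken from the thread):

```python
# Sketch of reciprocal rank fusion (RRF), the technique behind the
# "reciprocal_rerank" fusion mode. Doc IDs below are made up.

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Merge several ranked result lists into one fused ranking.

    Each document scores sum(1 / (k + rank)) over the lists it appears
    in, so items ranked well by multiple retrievers rise to the top.
    """
    scores = {}
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense = ["doc_a", "doc_b", "doc_c"]   # e.g. from a dense (vector) retriever
sparse = ["doc_b", "doc_d", "doc_a"]  # e.g. from a sparse (keyword) retriever
fused = reciprocal_rank_fusion([dense, sparse])
# doc_b and doc_a appear in both lists, so they outrank doc_d and doc_c
```

This is only the rank-merging math; the actual retriever also handles query generation and async retrieval.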
5 comments
Seems like they aren't implemented in the integration source code 👀 Someone would have to add it
Oh ok, I thought so but just wanted to check. I guess I will use pgvecto.rs then for easier tasks where speed would be more crucial than accuracy, at least for now.
And how difficult would it be to implement something like that, roughly estimated of course? I might consider trying it.
Assuming you have a decent understanding of pgvecto.rs itself already, probably only a day's work or so

Adding sparse embeddings got a bit easier recently since we added a new SparseEmbeddingModel base class! There's only one official integration at the moment (fastembed), but yeah, the idea would be to accept that in the constructor and use it to generate the sparse embeddings

I've been meaning to add this to some other existing vector stores, but haven't quite yet 😅
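To make the idea above concrete, here is a library-free sketch of the data shape a sparse embedding model produces and how a store would score it. Real integrations (e.g. fastembed's SPLADE models) emit learned term weights keyed by vocabulary indices; this toy version just counts whitespace tokens and keys by the term string, which is enough to show the mechanism. All names here are illustrative, not llama-index API:

```python
# Toy sparse embeddings: a sparse vector is a mapping of term -> weight
# (real models use integer vocabulary indices and learned weights).

from collections import Counter

def toy_sparse_embed(text):
    """Return a sparse vector as {term: weight} using raw term counts."""
    counts = Counter(text.lower().split())
    return {term: float(n) for term, n in counts.items()}

def sparse_dot(query_vec, doc_vec):
    """Sparse dot product: only overlapping terms contribute to the score."""
    return sum(w * doc_vec[t] for t, w in query_vec.items() if t in doc_vec)

docs = ["hybrid search with pgvector", "fast sparse retrieval"]
doc_vecs = [toy_sparse_embed(d) for d in docs]
query_vec = toy_sparse_embed("sparse retrieval")
scores = [sparse_dot(query_vec, dv) for dv in doc_vecs]
```

The integration work described in the thread is essentially wiring a real model with this interface into the vector store: generate sparse vectors at index time, store them, and score queries against them alongside the dense search.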
Ok cool thanks