fastembed is breaking for QdrantVectorStore

fastembed is breaking for QdrantVectorStore. I ran pip install on the entire project's dependencies and it's failing.
[Attachment: Screenshot_2024-04-11_at_2.32.15_PM.png]
It's an optional dependency for hybrid search
as the error says, just pip install fastembed
We switched from pure transformers in order to use a model that's actually licensed for commercial use (it's not so nice to have the default sparse embedding model be non-commercial).
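For context, here is a minimal sketch of the hybrid-search setup that pulls fastembed in. The collection name, URL, and local Qdrant instance are illustrative assumptions, not details from this thread:

```python
# Minimal hybrid-search setup with QdrantVectorStore (llama-index 0.10.x).
# Requires: pip install llama-index-vector-stores-qdrant fastembed
from qdrant_client import QdrantClient
from llama_index.vector_stores.qdrant import QdrantVectorStore

# Assumes a Qdrant instance running locally (illustrative URL).
client = QdrantClient(url="http://localhost:6333")

# enable_hybrid=True is what triggers the fastembed requirement:
# the store uses a fastembed sparse model for the sparse side of the search.
vector_store = QdrantVectorStore(
    collection_name="my_collection",  # hypothetical collection name
    client=client,
    enable_hybrid=True,
)
```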
Thanks @Logan M, appreciate the help. Actually, something is going on with 'llama-index-vector-stores-qdrant': there was a release yesterday and it's breaking, so I pinned the version to 'llama-index-vector-stores-qdrant==0.1.6' and it works again.
Works fine for me
Might help if you provided more details
Interesting, I ran the notebook as-is and it's failing. I don't want to bother you too much, Logan; I'll definitely run the latest version and get back to you with more concrete details.
[Attachment: Screenshot_2024-04-11_at_5.23.37_PM.png]
I see, the sparse model was changed to a commercially licensed one.
I'm having a similar issue, and things break on Qdrant... did you find a fix?
ERROR:root:Error: Unexpected Response: 400 (Bad Request)
Raw response content:
b'{"status":{"error":"Wrong input: Vector params for text-sparse-new are not specified in config"},"time":0.000474557}'
@Maverick How did you replicate this error? I've not had any issues, but happy to debug if I have a reproducible case
Not sure; it started when I updated my packages (and I happened to delete my old venv in that process) 😦
The only solution was to revert to my previous stable versions of llama-index & qdrant-client, and things are fine.
I could tweak them... but I didn't have the patience then.
Just curious where fastembed came from after the update... any thoughts? (I never had it previously.)
fastembed was introduced because the old default sparse embedding model was non-commercial (before, it used a pure pytorch/transformers package)
I haven't had any issues, and no one has provided a reproducible case, so it's pretty hard to debug or help 😅
I'm currently on llama-index (0.10.15) & qdrant-client (1.7.3)... what's the easiest way to update to fastembed without hitting the errors above? It'd be of great help @Logan M
Just install fastembed and transformers: pip install fastembed transformers

Any indexes you've already created in Qdrant using the old approach will continue to use the old (non-commercial) splade model with transformers, for backwards compatibility.
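If it helps anyone hitting this later, one way to check which sparse vector names an existing collection declares is to inspect its config (a sketch assuming qdrant-client >= 1.7; the collection name and URL are hypothetical):

```python
# Inspect an existing collection's config to see its sparse vector names;
# old indexes keep the old name, new ones should show "text-sparse-new".
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")  # illustrative URL

info = client.get_collection("my_collection")  # hypothetical collection name
print(info.config.params.sparse_vectors)  # mapping of sparse vector name -> params
```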