I was looking to fine-tune an adapter as detailed here, https://docs.llamaindex.ai/en/stable/examples/finetuning/embeddings/finetune_embedding_adapter.html, but it seems the hit rate is lower than if we were to use the ada-002 model. Doesn't a higher hit rate mean the model is better at retrieving the right documents?
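For context, the flow in that notebook looks roughly like this. This is a sketch assuming the pre-0.10 `llama_index` package layout, and that a `train_dataset.json` was already generated as an `EmbeddingQAFinetuneDataset`:

```python
from llama_index.finetuning import EmbeddingAdapterFinetuneEngine
from llama_index.finetuning.embeddings.common import EmbeddingQAFinetuneDataset
from llama_index.embeddings import resolve_embed_model

# Load a previously generated (query, relevant-doc) training set.
train_dataset = EmbeddingQAFinetuneDataset.from_json("train_dataset.json")

# The base model stays frozen; only a small linear adapter on top is trained.
base_embed_model = resolve_embed_model("local:BAAI/bge-small-en")

finetune_engine = EmbeddingAdapterFinetuneEngine(
    train_dataset,
    base_embed_model,
    model_output_path="model_output_test",
    epochs=4,
    verbose=True,
)
finetune_engine.finetune()

# Returns the base model wrapped with the trained adapter.
embed_model = finetune_engine.get_finetuned_model()
```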
I think it shows that ada is better than bge, but bge can get slightly better with fine-tuning (for this specific dataset)
(tbh I just quickly skimmed it lol)
Which metrics show that bge can be slightly better in this article? Both hit rate and MRR are lower (0.803797 vs 0.87088 and 0.667426 vs 0.728840 at best).
I don't understand the point of this article if it yields poorer metrics.
Or am I misunderstanding something?
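For reference, the two metrics being compared are usually computed like this. A minimal self-contained sketch; `results` and `expected` are illustrative names, not from the notebook:

```python
def hit_rate_and_mrr(
    results: dict[str, list[str]], expected: dict[str, str]
) -> tuple[float, float]:
    """results maps each query id to its ranked retrieved doc ids;
    expected maps each query id to its single relevant doc id."""
    hits, rr_total = 0, 0.0
    for query_id, ranked_ids in results.items():
        relevant_id = expected[query_id]
        if relevant_id in ranked_ids:
            hits += 1  # hit rate: relevant doc appears anywhere in top-k
            # MRR additionally rewards ranking the relevant doc higher
            rr_total += 1.0 / (ranked_ids.index(relevant_id) + 1)
    n = len(results)
    return hits / n, rr_total / n

# Example: one query, relevant doc ranked 2nd -> hit rate 1.0, MRR 0.5
# hit_rate_and_mrr({"q1": ["d3", "d7"]}, {"q1": "d7"})  # (1.0, 0.5)
```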
bge has a hit rate of 0.78, and with the fine-tuned adapter it's 0.79.
So there's only a small increase when fine-tuning an adapter for bge.
But I was under the impression that it would be better than ada.
Is it supposed to be better if we fine-tune some other text embedding model?
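For what it's worth, the notebook's comparison boils down to running the same retrieval eval over each embedding model. A rough sketch, again assuming the pre-0.10 `llama_index` API; `evaluate_hit_rate` and `top_k` are illustrative names:

```python
from llama_index import ServiceContext, VectorStoreIndex
from llama_index.schema import TextNode

def evaluate_hit_rate(dataset, embed_model, top_k: int = 5) -> float:
    """dataset is an EmbeddingQAFinetuneDataset with corpus, queries,
    and relevant_docs attributes."""
    service_context = ServiceContext.from_defaults(embed_model=embed_model)
    nodes = [TextNode(id_=id_, text=text) for id_, text in dataset.corpus.items()]
    index = VectorStoreIndex(nodes, service_context=service_context)
    retriever = index.as_retriever(similarity_top_k=top_k)

    hits = 0
    for query_id, query in dataset.queries.items():
        retrieved_ids = [n.node.node_id for n in retriever.retrieve(query)]
        if dataset.relevant_docs[query_id][0] in retrieved_ids:
            hits += 1
    return hits / len(dataset.queries)

# Running this on the same validation set for the base bge model and for
# finetune_engine.get_finetuned_model() is what produces the 0.78 vs 0.79
# comparison discussed above.
```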
What gave you that impression? Especially since it's bge-small, its performance compared to ada is quite impressive imo

People want local embeddings, and they want them to be good. This is one option to improve the performance of local embeddings πŸ€”
I see. I was under the impression that this was supposed to tell us how to fine-tune an embedding model that's better than ada.
Thank you for your answer.