Updated 4 months ago

intfloat/e5-mistral-7b-instruct · Huggin...

At a glance

The community member was looking at the MTEB benchmark and saw that an LLM-based model, intfloat/e5-mistral-7b-instruct (built on the 7B-parameter Mistral), was ranked number one. They asked whether they could use the fine-tuning repository to fine-tune Mistral. In the comments, another community member advised against using an LLM as an embedding model, noting that there is currently no support for fine-tuning LLMs for this purpose. Another comment pointed out that at 7B parameters, the Mistral-based model only barely beats other models.

Useful resources
Was looking at the MTEB benchmark and saw https://huggingface.co/intfloat/e5-mistral-7b-instruct
as number one... it's an LLM
https://huggingface.co/spaces/mteb/leaderboard
Can we use the fine-tuning repo to fine-tune Mistral?
3 comments
lol yeaaa... I would not use an LLM as an embedding model.

We don't have any support right now for fine-tuning LLMs to be embedding models
7B parameters and it just barely beats other models lol
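For context on what "using an LLM as an embedding model" means here: per its model card, e5-mistral-7b-instruct turns a decoder-only LLM into an embedding model by taking the hidden state at the last non-padding token of each sequence as the text embedding. A minimal numpy sketch of that pooling step (toy tensors, not the actual model weights) is:

```python
import numpy as np

def last_token_pool(hidden_states, attention_mask):
    # hidden_states: (batch, seq_len, dim) from the decoder's final layer
    # attention_mask: (batch, seq_len), 1 for real tokens, 0 for padding
    # For right-padded batches, the embedding is the hidden state at the
    # last non-padding position of each sequence.
    last_idx = attention_mask.sum(axis=1) - 1       # index of last real token
    batch_idx = np.arange(hidden_states.shape[0])
    emb = hidden_states[batch_idx, last_idx]
    # L2-normalize so cosine similarity reduces to a dot product
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)

# toy example: batch of 2, seq_len 4, hidden dim 3
h = np.arange(24, dtype=float).reshape(2, 4, 3)
mask = np.array([[1, 1, 1, 0],   # first sequence has one padding token
                 [1, 1, 1, 1]])
vecs = last_token_pool(h, mask)
print(vecs.shape)  # (2, 3)
```

This illustrates why fine-tuning support is a separate question: the pooling itself is trivial, but training a 7B decoder to produce useful embeddings requires a contrastive objective and infrastructure that the fine-tuning repo discussed here does not provide.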