intfloat/e5-mistral-7b-instruct · Hugging Face
DangFutures
12 months ago
I was looking at the MTEB benchmark and saw
https://huggingface.co/intfloat/e5-mistral-7b-instruct
at number one... it's an LLM
https://huggingface.co/spaces/mteb/leaderboard
Can we use the fine-tuning repo to fine-tune Mistral?
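For anyone wondering how a decoder-only LLM like this can produce a single embedding at all: models in this family typically pool the hidden state of the last non-padding token of the sequence. A minimal sketch of that last-token pooling, using dummy tensors instead of the real 7B weights (no model download; tensor shapes are illustrative assumptions):

```python
import torch

def last_token_pool(hidden: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Pool by taking the hidden state of each sequence's final non-pad token.

    hidden:         (batch, seq_len, hidden_dim) model outputs
    attention_mask: (batch, seq_len) with 1 for real tokens, 0 for padding
    """
    seq_lens = attention_mask.sum(dim=1) - 1        # index of last real token per row
    batch_idx = torch.arange(hidden.size(0))
    return hidden[batch_idx, seq_lens]              # (batch, hidden_dim)

# Dummy "hidden states" standing in for real model output
hidden = torch.arange(24, dtype=torch.float32).reshape(2, 4, 3)
mask = torch.tensor([[1, 1, 1, 0],    # 3 real tokens, 1 pad
                     [1, 1, 1, 1]])   # no padding
emb = last_token_pool(hidden, mask)
print(emb.shape)  # torch.Size([2, 3])
```

Row 0 pools position 2 (its last real token) and row 1 pools position 3, so padded and unpadded sequences in the same batch are handled uniformly.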
Logan M
12 months ago
lol yeaaa... I would not use an LLM as an embedding model.
We don't have any support right now for fine-tuning LLMs to be embedding models.
Logan M
12 months ago
7B parameters and it just barely beats other models lol
DangFutures
12 months ago
good idea