Instructor
DrSebastianK
last year
Hey! Has anyone faced speed issues with custom embedding models? I'm using the instructor-xl model to create a vector DB with LlamaIndex, but it is extremely slow: 23 vectors take about 8 minutes. I'm running on Colab and using the HF LangChain wrapper.
3 comments
Logan M
last year
Are you using a GPU? I know that model is fairly big for running on CPU.
DrSebastianK
last year
Oh my god, you are absolutely right! Sorry for the question, what an epic failure!
Logan M
last year
Haha no worries!