Updated last year

Instructor

Hey! Has anyone faced speed issues with custom embedding models? I'm using the instructor-xl model to build a vector DB with LlamaIndex, but it is extremely slow: 23 vectors take about 8 minutes. I'm running on Colab and using the HF LangChain wrapper.

3 comments
Are you using a GPU? I know that model is fairly big for running on CPU.
Oh my god, you are absolutely right! Sorry for the question, what an epic failure!
Haha no worries! πŸ’ͺ
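For anyone landing here with the same problem, here is a minimal sketch of the fix the thread points at: making sure the embedding model actually runs on the GPU. It assumes LangChain's `HuggingFaceInstructEmbeddings` wrapper and the `hkunlp/instructor-xl` checkpoint (neither import path nor exact kwargs are confirmed by the thread, so adjust to your versions). The key detail is passing `model_kwargs={"device": ...}` so the underlying sentence-transformers model is placed on CUDA when available.

```python
def pick_device() -> str:
    """Return 'cuda' if PyTorch can see a GPU, else 'cpu'."""
    try:
        import torch
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        # torch not installed; the wrapper will fall back to CPU anyway
        return "cpu"

def make_embed_kwargs() -> dict:
    """Build kwargs for the embeddings wrapper, pinning the device explicitly."""
    return {
        "model_name": "hkunlp/instructor-xl",
        "model_kwargs": {"device": pick_device()},
    }

if __name__ == "__main__":
    # Guarded: instantiating downloads several GB of model weights.
    from langchain.embeddings import HuggingFaceInstructEmbeddings
    emb = HuggingFaceInstructEmbeddings(**make_embed_kwargs())
    print(len(emb.embed_query("hello world")))
```

On a Colab GPU runtime (Runtime → Change runtime type → GPU), `pick_device()` returns `"cuda"` and embedding a few dozen chunks should drop from minutes to seconds; on the default CPU runtime it stays slow, which matches what the original poster saw.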