I am building an index with 300k documents. Is it possible to see the progress of the build, like a tqdm progress bar?
Moreover, any ideas for optimizing the running time? I use model_name=BAAI/bge-small-en; can it run on a GPU?
index = VectorStoreIndex.from_documents(documents, show_progress=True)
I can't find anything in the docs on how to improve the embedding speed.
I am using the local embedding model: "Could not load OpenAIEmbedding. Using HuggingFaceBgeEmbeddings with model_name=BAAI/bge-small-en. If you intended to use OpenAI, please check your OPENAI_API_KEY."
I mean, how can I run BAAI/bge-small-en faster? What if I add a GPU to my machine?
Maybe @Logan M can jump in here.

You can try better hardware, but the embedding itself doesn't have many other levers for speed, especially this LangChain one; with OpenAIEmbeddings you would increase the batch size.
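To make the batch-size point concrete, here is a rough back-of-the-envelope sketch: larger batches mean fewer model calls and less per-call overhead for the 300k documents in the question. The batch sizes 10 and 64 are illustrative assumptions, not values from the thread.

```python
import math

def num_batches(n_docs: int, batch_size: int) -> int:
    """Number of embedding calls needed to cover all documents."""
    return math.ceil(n_docs / batch_size)

n_docs = 300_000  # index size from the question

# Smaller batches -> more calls -> more fixed overhead per document.
calls_small = num_batches(n_docs, 10)
calls_large = num_batches(n_docs, 64)

print(calls_small, calls_large)
```

With a batch size of 10 that is 30,000 calls versus 4,688 at batch size 64, which is why raising the batch size (within GPU memory limits) usually helps.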
GPU will definitely help here
I think if you have CUDA installed, it will automatically run on the GPU too.
Oh nice, I'll try it on a CUDA GPU.
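Putting the advice from the thread together, a minimal configuration sketch might look like the following. This is an assumption-laden sketch, not verified against a specific llama-index release: the import paths and the device/embed_batch_size parameters exist in recent versions of the HuggingFace embeddings integration, but check the docs for your installed version, and documents here is the same variable as in the snippet earlier in the thread.

```python
# Sketch: run the local BGE embedding model on the GPU with a larger batch.
from llama_index.core import VectorStoreIndex, Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

Settings.embed_model = HuggingFaceEmbedding(
    model_name="BAAI/bge-small-en",
    device="cuda",        # assumes a CUDA-enabled torch install; falls back needed otherwise
    embed_batch_size=64,  # illustrative value; tune for your GPU memory
)

# show_progress=True prints tqdm-style progress bars for parsing and embedding.
index = VectorStoreIndex.from_documents(documents, show_progress=True)
```

If CUDA-enabled torch is installed, many versions will pick the GPU automatically even without an explicit device argument, as suggested above.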