The community member is asking if there is a way to parallelize the process of indexing documents into Chroma DB, as the current process is only using one GPU. A comment suggests setting the environment variable CUDA_VISIBLE_DEVICES to expose all available GPUs, and also links to the LlamaIndex documentation on parallel ingestion, which may help parallelize the processing.
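As a minimal sketch of the CUDA_VISIBLE_DEVICES suggestion (the GPU indices "0,1" and the script name are assumptions; check your own `nvidia-smi` output):

```python
import os

# Make both GPUs visible to this process. This must be set before any
# CUDA-using library (e.g. torch) initializes a device. On its own it only
# exposes the GPUs; the embedding workload still has to be placed on them.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

# Equivalent shell form (hypothetical script name):
#   CUDA_VISIBLE_DEVICES=0,1 python ingest.py
```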
Is there a way to make the process of indexing documents parallel? For example, here the chroma_db creation process is only using one GPU for me. I was wondering if there is an option to make it use both of my GPUs?
The process just loads documents that I have already parsed, indexes them using HuggingFace embeddings, and saves them to a ChromaDB collection.
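Below is a rough sketch of what the linked parallel-ingestion approach could look like for this setup, using LlamaIndex's IngestionPipeline with a Chroma vector store. The model name, paths, collection name, device argument, and num_workers value are assumptions for illustration, not details from the thread:

```python
import chromadb
from llama_index.core import SimpleDirectoryReader
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.vector_stores.chroma import ChromaVectorStore

# Persistent Chroma collection (path and collection name are placeholders).
client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection("docs")
vector_store = ChromaVectorStore(chroma_collection=collection)

# HuggingFace embedding model; the model name and device are just examples.
embed_model = HuggingFaceEmbedding(
    model_name="BAAI/bge-small-en-v1.5",
    device="cuda:0",
)

pipeline = IngestionPipeline(
    transformations=[SentenceSplitter(), embed_model],
    vector_store=vector_store,
)

# Stand-in loader for documents that were already parsed elsewhere.
documents = SimpleDirectoryReader("./parsed_docs").load_data()

# num_workers > 1 runs the ingestion transformations in parallel worker
# processes, per the parallel ingestion docs linked above.
pipeline.run(documents=documents, num_workers=4)
```

Note that num_workers parallelizes across CPU worker processes; it does not automatically spread a single embedding model across two GPUs, and combining a GPU-loaded model with multiprocessing can be finicky. A more predictable workaround is to shard the parsed documents and run one ingestion process per GPU, each pinned with CUDA_VISIBLE_DEVICES as suggested above, all writing to the same Chroma collection.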