Hi, I'm using a custom embedding. When updating the index, is there a way to initiate it on the GPU? Because I have a large dataset, and updating the embeddings in the document store on the CPU takes really long:
# Index the documents using LlamaIndex and the custom embedding
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context, service_context=service_context)
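A minimal sketch of one way to do this, assuming the custom embedding wraps a HuggingFace / sentence-transformers model (the model_name below is a placeholder, swap in your own): pass device="cuda" so encoding runs on the GPU, and raise embed_batch_size for throughput.

# Sketch: GPU-backed embedding via the legacy ServiceContext API.
# documents and storage_context are the same objects as in the snippet above.
import torch
from llama_index import ServiceContext, VectorStoreIndex
from llama_index.embeddings import HuggingFaceEmbedding

device = "cuda" if torch.cuda.is_available() else "cpu"
embed_model = HuggingFaceEmbedding(
    model_name="BAAI/bge-small-en-v1.5",  # placeholder; use your own model
    device=device,                        # run the encoder on the GPU
    embed_batch_size=64,                  # larger batches help GPU throughput
)
service_context = ServiceContext.from_defaults(embed_model=embed_model)
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
    service_context=service_context,
)

If the embedding is instead a fully custom BaseEmbedding subclass, the equivalent move is inside your own code: put the underlying torch model on the GPU with .to("cuda") and encode in batches there.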
Hello guys, I've been trying to use AWS Bedrock with LlamaIndex, but it's triggering this error: "botocore.exceptions.NoRegionError: You must specify a region." When I use boto3 directly it works smoothly, and I'm following the documentation, yet it still raises the error, so it must be related to the boto package inside LlamaIndex? I'm looking to use LlamaIndex chat completion with whatever Bedrock provides, from Claude to Meta etc. Can someone check and see if it needs a fix, or whether I'm doing something wrong on my side?
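A hedged sketch of two possible workarounds, assuming the error comes from the boto3 client LlamaIndex builds internally without a region: either set the region through the standard botocore environment variable (which boto3 picks up no matter who constructs the client), or pass it explicitly to the Bedrock LLM. The region_name keyword is an assumption based on the Bedrock LLM docs; check the signature in your installed version.

import os

# Option 1: set the region at the environment level; botocore reads this
# regardless of how LlamaIndex constructs its boto3 client.
os.environ["AWS_DEFAULT_REGION"] = "us-east-1"

# Option 2 (assumed keyword): pass the region when building the LLM.
from llama_index.llms import Bedrock

llm = Bedrock(
    model="anthropic.claude-v2",  # any Bedrock model id you have access to
    region_name="us-east-1",
)
print(llm.complete("Hello from Bedrock"))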
Yes, that one. But I want to understand something: what's the role of service_context in a LlamaIndex RAG pipeline? When the tutorial sets GPT-4 as service_context.predictor, will we be using the custom embedding or the OpenAI embedding when retrieving the documents during query decomposition?
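A minimal sketch using the legacy ServiceContext API the tutorial is on: the llm (the "predictor") handles generation and query decomposition, while embed_model is used for embedding at both index time and query (retrieval) time. So if you pass your custom embedding here, retrieval uses it and OpenAI embeddings are never called, even with GPT-4 as the LLM.

from llama_index import ServiceContext
from llama_index.llms import OpenAI

service_context = ServiceContext.from_defaults(
    llm=OpenAI(model="gpt-4"),  # generation + query decomposition
    embed_model=embed_model,    # your custom embedding, used for retrieval too
)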