
Async

Hello. Does anyone know if I can parallelize the from_documents function? The embedding step is taking a long time, and I was wondering if it can be done in parallel. My code currently is:
Plain Text
import chromadb
from llama_index import StorageContext, VectorStoreIndex
from llama_index.vector_stores import ChromaVectorStore

db = chromadb.PersistentClient(path="./polygon")
collection = db.get_or_create_collection("default")
vector_store = ChromaVectorStore(chroma_collection=collection)

# Build the index directly from the documents, storing embeddings in Chroma
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context, show_progress=True
)
query_engine = index.as_query_engine()

I want to speed up the ingestion of these documents.
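One general way to attack this, independent of any particular library, is to split the documents into batches and issue the embedding calls concurrently: embedding is usually I/O-bound (an HTTP request per batch), so a thread pool helps. Below is a minimal, self-contained sketch of that idea; embed_batch is a hypothetical stand-in for a real embedding call, not a LlamaIndex API. (LlamaIndex versions may also expose knobs such as a use_async flag on from_documents or an embed_batch_size setting on the embedding model; check the docs for your installed version.)

```python
from concurrent.futures import ThreadPoolExecutor

def embed_batch(texts):
    # Stand-in for a real embedding call (e.g. one API request per batch).
    # Here it just returns a fake one-dimensional vector per text.
    return [[float(len(t))] for t in texts]

def parallel_embed(texts, batch_size=2, max_workers=4):
    # Split the texts into batches and embed the batches concurrently.
    # Threads are a good fit because the real work is network I/O.
    batches = [texts[i:i + batch_size] for i in range(0, len(texts), batch_size)]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = pool.map(embed_batch, batches)  # map preserves batch order
    # Flatten the per-batch results back into one list, aligned with texts.
    return [vec for batch in results for vec in batch]

embeddings = parallel_embed(["a", "bb", "ccc", "dddd", "eeeee"])
```

The batch size trades off request overhead against per-request latency; order is preserved, so each returned vector lines up with its input text.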