import chromadb
from llama_index import StorageContext, VectorStoreIndex
from llama_index.vector_stores import ChromaVectorStore

chroma_client = chromadb.PersistentClient(path="chroma")
chroma_collection = chroma_client.get_or_create_collection("cardcom_collection")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context, service_context=service_context
)
I'm getting an error on VectorStoreIndex. It's already imported, and my whole code was working before; my llama-index version is 0.8.45.post1. Can someone help? This is most likely a package issue, since I had the whole thing working yesterday.

VectorStoreIndex
as a retriever gives faster LLM generation compared to using a vector database like Chroma. This is especially noticeable if you are running an open-source LLM like Llama 2. Based on this observation, I'm inclined to drop Chroma from the pipeline. Can someone change my mind?
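For context on why the in-memory route can feel faster: a plain VectorStoreIndex with the default simple vector store just does a brute-force similarity scan over embeddings held in process memory, with no persistence layer or client round-trip per query. A minimal pure-Python sketch of that kind of lookup (toy 3-d vectors standing in for real embeddings; the doc names and vectors are made up for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query, store, k=2):
    """Brute-force scan: score every stored embedding against the query."""
    ranked = sorted(store.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Hypothetical 3-d "embeddings" for three documents.
store = {
    "doc_refunds":  [0.9, 0.1, 0.0],
    "doc_billing":  [0.8, 0.2, 0.1],
    "doc_shipping": [0.0, 0.1, 0.9],
}

print(top_k([1.0, 0.0, 0.0], store))  # -> ['doc_refunds', 'doc_billing']
```

That scan is cheap for a small corpus, which is where the speed difference you're seeing likely comes from; the trade-off flips once the collection no longer fits comfortably in memory or you need persistence across runs, which is what Chroma buys you. It may also be worth profiling retrieval time separately from generation time before dropping Chroma, since LLM generation usually dominates end-to-end latency.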