VectorStoreIndex as a retriever gives faster LLM generation than using a vector database like Chroma. This is especially noticeable if you are running an open-source LLM like Llama 2. Based on this observation I am likely to drop Chroma from the pipeline. Can someone change my mind?

Chroma is a vector database, used to store your vectors/embeddings somewhere other than local memory, so that you can access the embeddings from any place or any system. If your embeddings are already stored in Chroma, you can load them into a VectorStoreIndex and it will work the same as a normal VectorStoreIndex.