I notice that using `VectorStoreIndex` as a retriever gives faster LLM generation than Chroma

I notice that using `VectorStoreIndex` as a retriever gives faster LLM generation compared to using a vector database like Chroma. This is especially noticeable if you are running an open-source LLM like Llama 2. Based on this observation, I am inclined to drop Chroma from the pipeline. Can someone change my mind?
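
For context, here is a minimal sketch of the two setups being compared. It assumes the current LlamaIndex package layout (`llama-index-core` plus `llama-index-vector-stores-chroma`; older versions import from `llama_index` directly), and the `./data` folder, `./chroma_db` path, and `quickstart` collection name are all illustrative:

```python
import chromadb
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores.chroma import ChromaVectorStore

# Assumes an embedding model is configured (OpenAI by default,
# or set Settings.embed_model for a local one).
documents = SimpleDirectoryReader("./data").load_data()  # illustrative path

# Setup 1: plain VectorStoreIndex -- vectors live in the default
# in-memory SimpleVectorStore.
in_memory_index = VectorStoreIndex.from_documents(documents)

# Setup 2: VectorStoreIndex backed by a persistent Chroma collection.
client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection("quickstart")
vector_store = ChromaVectorStore(chroma_collection=collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
chroma_index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context
)
```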
2 comments
Chroma is a vector database, used to store your vectors/embeddings somewhere other than locally, so that you can access the embeddings from any place or any system.

Once you fetch the vectors from Chroma, you can load them into a `VectorStoreIndex` and it will work the same as a normal `VectorStoreIndex`.
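
A minimal sketch of that reconnect step, reusing the illustrative `./chroma_db` path and `quickstart` collection from above (the query string is made up):

```python
import chromadb
from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.chroma import ChromaVectorStore

# Reconnect to the same persistent collection -- from this or any other
# process/machine that can reach the Chroma store.
client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection("quickstart")
vector_store = ChromaVectorStore(chroma_collection=collection)

# Rebuild the index straight from the stored vectors; nothing is
# re-embedded, and querying works like a normal VectorStoreIndex.
# (The embedding model must match the one used at build time.)
index = VectorStoreIndex.from_vector_store(vector_store)
response = index.as_query_engine().query("What does the document cover?")  # illustrative query
print(response)
```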
Thanks! I was thinking about this. I appreciate your wonderful insight!