Updated 2 months ago

Default search

For LlamaIndex's default VectorStoreIndex, how is its default search functionality different from others such as Pinecone, Qdrant, Lantern, etc.?
I know they support metadata filtering and maybe other stuff, but if I were to use their barebones search, how would it differ from VectorStoreIndex's default search?
Other vector DBs might use more advanced indexing methods like HNSW, etc., that usually scale well to millions of vectors in terms of memory usage and retrieval speed

The default vector store is very simple -- it's an in-memory list of vectors. When you query, we compare the query vector to every indexed vector using pairwise cosine similarity

This is great for getting started, or for smaller sets of data. But if your saved index starts to balloon in size to GBs, or you need something that's hosted on a server, I would look into other vector dbs
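To make the brute-force behavior described above concrete, here's a minimal sketch of what an in-memory vector store does at query time: score the query against every stored vector with cosine similarity and return the top-k. The function and variable names are illustrative, not actual LlamaIndex APIs.

```python
import numpy as np

def top_k_cosine(query: np.ndarray, vectors: np.ndarray, k: int = 2):
    """Return (indices, scores) of the k rows of `vectors` most similar to `query`."""
    # Normalize so a plain dot product equals cosine similarity.
    q = query / np.linalg.norm(query)
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    scores = v @ q                       # one similarity per stored vector -- O(n)
    idx = np.argsort(scores)[::-1][:k]   # best-first
    return idx, scores[idx]

# Tiny example: three stored 2-d vectors, query closest to the first.
vectors = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
idx, scores = top_k_cosine(np.array([1.0, 0.1]), vectors, k=2)
print(idx)  # indices of the two nearest vectors, best first
```

The cost is linear in the number of stored vectors, which is exactly why this is fine for small indexes but why dedicated vector DBs switch to approximate structures like HNSW at scale.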
Thanks, what you mentioned sounds like other DBs help in the memory/speed department. But when it comes to default search quality, I'm guessing the others are also using pairwise cosine similarity for the most part? πŸ€”
Yeah, I would expect the overall quality of base retrieval to be very similar across all vector DBs

The main advantages are lower resource usage and the other fancy features those DBs offer