For LlamaIndex's default VectorStoreIndex, how is its default search functionality different from others such as Pinecone, Qdrant, Lantern, etc.? I know they support metadata-filtered search and maybe other stuff, but if I were to use their barebones search, how is it different from VectorStoreIndex's default search?
Other vector dbs might use approximate nearest neighbor (ANN) indexing structures like HNSW, which trade a small amount of recall for much better scaling -- they usually handle millions of vectors well in terms of memory usage and retrieval speed
The default vector store is very simple -- it's an in-memory list of vectors. When you query, we compare the query vector to every indexed vector in a brute-force (exact) pairwise cosine similarity search
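Conceptually, that brute-force search looks something like this (a minimal sketch with made-up 2D vectors, not LlamaIndex's actual code):

```python
import numpy as np

def top_k_cosine(query: np.ndarray, vectors: np.ndarray, k: int = 2):
    """Compare the query to every stored vector by cosine similarity."""
    # Normalize both sides so a dot product equals cosine similarity
    q = query / np.linalg.norm(query)
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    scores = v @ q                       # one similarity score per stored vector
    top = np.argsort(scores)[::-1][:k]   # indices of the k highest scores
    return top, scores[top]

# Toy "index" of three embeddings
vectors = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
idx, scores = top_k_cosine(np.array([1.0, 0.2]), vectors)
print(idx)  # → [0 2]: nearest vectors first
```

Every query touches every vector, so cost grows linearly with index size -- which is exactly why it's fine for small collections but not for millions of vectors.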
This is great for getting started, or for smaller sets of data. But if your saved index starts to balloon in size to GBs, or you need something that's hosted on a server, I would look into other vector dbs
Thanks, what you mentioned sounds like other dbs help in the memory / speed department. But when it comes to the default search quality, I'm guessing others are also using pairwise cosine similarity search for the most part? 🤔