https://docs.llamaindex.ai/en/latest/

https://docs.llamaindex.ai/en/latest/examples/vector_stores/SupabaseVectorIndexDemo.html#

Does the llama-index Supabase vector store support in-memory vecs? I don't want to store my vectors in a remote Supabase vecs collection; I just want to create in-memory vecs collections on the fly.
I have no idea tbh. Is that a setting on the vecs client?

Looking at the source code, if that's a setting on the client, the store might need to be updated to accept an existing vecs client rather than creating one for you: https://github.com/run-llama/llama_index/blob/7c27b3125d5e44e2757a8922f7462257a4e70faa/llama_index/vector_stores/supabase.py#L42
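For reference, here's a minimal sketch of current usage based on that source file: the store builds its own vecs client from a Postgres connection string, so there's no parameter for handing it a pre-configured client. The connection string and collection name below are placeholders.

```python
from llama_index.vector_stores import SupabaseVectorStore

# The store calls vecs.create_client(postgres_connection_string) internally,
# so a running Postgres instance (remote or local) is always assumed.
vector_store = SupabaseVectorStore(
    postgres_connection_string="postgresql://user:password@localhost:5432/postgres",
    collection_name="demo_collection",
)
```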

Although if you want in-memory vectors, there are a ton of other options too (our default vector store, Qdrant, Chroma, etc.)
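As a quick sketch (assuming the monolithic llama_index package this thread links to), the default vector store already keeps everything in process memory when you don't pass a vector store at all:

```python
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# With no explicit vector store, the index uses the default SimpleVectorStore,
# which lives entirely in memory.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
print(query_engine.query("What is this document about?"))
```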
Thanks. Another question: do the in-memory vector stores behave the same as the Supabase vector store? I'm not sure whether the semantic search algorithm is the same across them.
I'm sure the search is slightly different. But most implement the same algorithm as everyone else (HNSW, etc.)
I've never noticed a difference between vector stores lol
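One way to see this for yourself is to swap the backing store while keeping the same index API. This sketch assumes the qdrant-client package, which supports a pure in-memory mode via location=":memory:"; the collection name and data path are placeholders.

```python
import qdrant_client
from llama_index import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores import QdrantVectorStore

# An in-memory Qdrant instance: nothing is persisted or sent over the network.
client = qdrant_client.QdrantClient(location=":memory:")
vector_store = QdrantVectorStore(client=client, collection_name="demo")

storage_context = StorageContext.from_defaults(vector_store=vector_store)
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# The query API is identical regardless of the backing vector store.
print(index.as_query_engine().query("What is this document about?"))
```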