Does the llama-index Supabase vector store support in-memory vecs? I don't want to store anything in a remote Supabase vecs database; I just want to create the vecs collection in memory on the fly.
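For context, here's roughly what I mean (a sketch only; it assumes the `llama_index.core` / `llama_index.vector_stores.supabase` package layout, and the data path, connection string, and collection name are placeholders):

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, StorageContext
from llama_index.vector_stores.supabase import SupabaseVectorStore

documents = SimpleDirectoryReader("./data").load_data()

# What I do today: embeddings end up in a remote Postgres/pgvector database via vecs.
supabase_store = SupabaseVectorStore(
    postgres_connection_string="postgresql://user:password@host:5432/postgres",  # placeholder
    collection_name="demo",  # placeholder
)
supabase_index = VectorStoreIndex.from_documents(
    documents,
    storage_context=StorageContext.from_defaults(vector_store=supabase_store),
)

# What I'd like instead: build the vectors purely in memory, on the fly,
# the way the default in-memory (SimpleVectorStore-backed) index works,
# with nothing persisted to a remote Supabase instance.
in_memory_index = VectorStoreIndex.from_documents(documents)
```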
Thanks. Another question: do the in-memory vector stores behave the same as the Supabase vector store? I'm not sure whether the semantic search algorithm is the same across them.
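Concretely, here's how I'd check whether they return comparable results (a sketch; `in_memory_index` and `supabase_index` are placeholders for indexes built from the same documents, one with the default in-memory store and one with `SupabaseVectorStore`):

```python
from llama_index.core import VectorStoreIndex

def top_k_nodes(index: VectorStoreIndex, query: str, k: int = 3):
    """Return (score, text snippet) pairs for the top-k retrieved nodes."""
    retriever = index.as_retriever(similarity_top_k=k)
    return [(n.score, n.node.get_content()[:80]) for n in retriever.retrieve(query)]

# Run the same query against both indexes and compare the hits and scores:
# print(top_k_nodes(in_memory_index, "what does the doc say about X?"))
# print(top_k_nodes(supabase_index, "what does the doc say about X?"))
```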