We don't actually have an integration using PostgresML yet, but we'd be super open to a PR on this!
In the large majority of our vector store integrations, LlamaIndex stores the text alongside the embeddings in the vector store itself. That means indexes built on those stores don't need to be persisted to disk separately, since the vector store is already persisting everything, which is actually super handy.
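To illustrate the idea, here's a toy sketch (not LlamaIndex's actual implementation; all names here are hypothetical): when the store keeps both the embedding and the original text per record, a query can return the matching text directly from the store, so no separate docstore on disk is needed.

```python
from dataclasses import dataclass

@dataclass
class Record:
    text: str
    embedding: list[float]

class ToyVectorStore:
    """Toy store that persists both text and embeddings,
    mimicking what most vector store integrations do.
    Hypothetical names, not a real LlamaIndex API."""

    def __init__(self):
        self._records: dict[str, Record] = {}

    def add(self, doc_id: str, text: str, embedding: list[float]) -> None:
        self._records[doc_id] = Record(text, embedding)

    def query(self, embedding: list[float], top_k: int = 1) -> list[str]:
        # Toy similarity: negative squared Euclidean distance.
        def score(r: Record) -> float:
            return -sum((a - b) ** 2 for a, b in zip(r.embedding, embedding))
        ranked = sorted(self._records.values(), key=score, reverse=True)
        # The matching *text* comes straight back from the store,
        # so the index never has to persist documents itself.
        return [r.text for r in ranked[:top_k]]

store = ToyVectorStore()
store.add("a", "llamas are camelids", [1.0, 0.0])
store.add("b", "indexes need storage", [0.0, 1.0])
print(store.query([0.9, 0.1]))  # -> ['llamas are camelids']
```

A PostgresML integration would fill the same role: both columns live in one Postgres table, and the index just queries it.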
Our current integrations are over here:
https://gpt-index.readthedocs.io/en/latest/how_to/integrations/vector_stores.html