
Any experience with using LlamaIndex in production?

Any experience with using LlamaIndex in production? I'd like people to upload their own data and then query it.

My concern is that loading all that data into memory is infeasible.
Is there an on-disk-only setting that prevents the process from loading all indexes into memory and using too much of it?
I have little AI/ML experience, but I've read about how LlamaIndex works under the hood. I'm unaware of whether it's possible to use a SQL database (Postgres) or Elasticsearch as a backend for storing and querying indexes.

I'm building a production-scale web server that'll parse multiple files, each belonging to a different customer, and query them. So having all of them stored in memory during querying is scary.
I'm using it quite a bit here: agent-hq.io

Right now I'm storing the indices on disk and loading them at query time.
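For reference, here's a minimal sketch of that persist-and-reload workflow. Import paths vary across LlamaIndex versions; this assumes the newer `llama_index.core` layout, and the per-customer directory scheme is just illustrative, not part of the API:

```python
from llama_index.core import (
    SimpleDirectoryReader,
    StorageContext,
    VectorStoreIndex,
    load_index_from_storage,
)

# Ingestion: build an index from one customer's uploads and persist it to disk.
# "customer_id" and the ./uploads, ./indexes layout are hypothetical.
def build_index(customer_id: str) -> None:
    docs = SimpleDirectoryReader(f"./uploads/{customer_id}").load_data()
    index = VectorStoreIndex.from_documents(docs)
    index.storage_context.persist(persist_dir=f"./indexes/{customer_id}")

# Query time: load only the one index you need instead of keeping all of them resident.
def query_index(customer_id: str, question: str):
    ctx = StorageContext.from_defaults(persist_dir=f"./indexes/{customer_id}")
    index = load_index_from_storage(ctx)
    return index.as_query_engine().query(question)
```

Note this still deserializes the selected index into memory at query time; it just avoids holding every customer's index at once.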

But Weaviate, Pinecone, or Postgres with the pgvector extension are all good options too.
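A rough sketch of what a vector-store-backed index looks like, using Weaviate as the example (this assumes the separate llama-index-vector-stores-weaviate integration package and a locally running Weaviate instance; the index name and paths are placeholders):

```python
import weaviate
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores.weaviate import WeaviateVectorStore

# Connect to Weaviate (v4 client) and wrap it as a LlamaIndex vector store.
client = weaviate.connect_to_local()
vector_store = WeaviateVectorStore(weaviate_client=client, index_name="CustomerDocs")

# Ingestion: embeddings are written into Weaviate rather than held in process memory.
docs = SimpleDirectoryReader("./uploads/some_customer").load_data()
storage_context = StorageContext.from_defaults(vector_store=vector_store)
VectorStoreIndex.from_documents(docs, storage_context=storage_context)

# Query time: attach to the existing store; only the top-k matching nodes come
# back from the database, so the full index never sits in this process's memory.
index = VectorStoreIndex.from_vector_store(vector_store)
response = index.as_query_engine(similarity_top_k=3).query("What did this customer upload?")
```

The same pattern applies to the other stores: swap in the matching vector-store class and connection details.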
@bbornsztein at query time, does LlamaIndex iterate over the indexes inside Postgres, or does it still load them into memory? How efficient is that?

I'm unaware of the internals, and your insight could really shorten my experimentation time here!
Sorry, I haven't used the pgvector extension myself, so I can't help much. @jerryjliu0 is pgvector supported as a vector store yet?
Nope 😮 haven't had the chance to add it