It’s a big production project, so big changes are not feasible. I didn’t know that mongo has a vector DB, interesting
I don't mean use mongo as a vector db, I meant for the storage of your Llama Index Documents or TextNodes. You said changes aren't feasible, but it doesn't sound like you have a production pipeline for generating your documents, so I meant, create a pipeline that creates and stores the Documents in mongo.
source material -> mongo -> vectordb
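A rough sketch of that `source material -> mongo -> vectordb` pipeline. Everything here is a hypothetical stand-in: the "doc store" is a plain dict in place of a Mongo collection, the chunker is a naive fixed-size splitter, and the "embedding" is just the chunk length so the example runs anywhere.

```python
# Hypothetical ingestion pipeline: source material -> document store -> vector index.
# The dict stands in for a Mongo collection; chunking and embedding are toy stand-ins.
from dataclasses import dataclass, field
import uuid

@dataclass
class TextNode:
    text: str
    doc_id: str = field(default_factory=lambda: str(uuid.uuid4()))

def chunk(source: str, size: int = 40) -> list:
    # naive fixed-size chunker standing in for a real node parser
    return [TextNode(source[i:i + size]) for i in range(0, len(source), size)]

def store_documents(nodes, doc_store: dict) -> None:
    # stand-in for something like collection.insert_many(...) against Mongo
    for node in nodes:
        doc_store[node.doc_id] = node.text

def build_vector_index(doc_store: dict) -> dict:
    # stand-in for embedding each stored chunk and upserting into a vector DB
    return {doc_id: [float(len(text))] for doc_id, text in doc_store.items()}

doc_store = {}
nodes = chunk("some long source material that gets split into chunks")
store_documents(nodes, doc_store)
index = build_vector_index(doc_store)
```

The point of the intermediate doc store is that the vector index can be rebuilt from it at any time without re-ingesting the source material.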
Qdrant is faster btw if scale is an issue
ah i didn't read fully
what is Qdrant @isaackogan? vectorDB?
Qdrant is a popular vectordb
quite a nice one too
ah... there's so many to try. I've only fiddled with chroma and pinecone
it stores the text chunks in the metadata so you don't even need to do that mongo storage
unfortunately you do need to wrap requests in a ThreadPoolExecutor as it is synchronous
but so is everything :/
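A minimal sketch of the workaround being described: a blocking call offloaded to a thread pool so it doesn't stall the event loop. `fetch_sync` is a hypothetical stand-in for a synchronous client call (e.g. a vector DB query); the names are illustrative, not from any library.

```python
# Offloading a synchronous call to a ThreadPoolExecutor from async code.
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_sync(query: str) -> str:
    time.sleep(0.05)  # pretend this is a blocking network call
    return f"results for {query}"

executor = ThreadPoolExecutor(max_workers=4)

async def fetch(query: str) -> str:
    loop = asyncio.get_running_loop()
    # run the blocking call in the pool instead of on the event loop
    return await loop.run_in_executor(executor, fetch_sync, query)

async def main():
    # the three queries overlap instead of running one after another
    return await asyncio.gather(*(fetch(q) for q in ["a", "b", "c"]))

results = asyncio.run(main())
```

`asyncio.gather` preserves input order, so the results line up with the queries even though the calls ran concurrently.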
lol, not sure what that means 😛
@theta basically, in python you can execute things one after the other, or with the work overlapping
series = synchronous
overlapping/concurrent = asynchronous
one way of achieving that concurrency is to use threads
the idea behind asynchronous code is that if you want to respond to 20 queries at once, you want to be able to handle them concurrently rather than 1 after the other
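A toy illustration of that point: handling a few "queries" one after another vs concurrently with `asyncio`, where each query just sleeps briefly to stand in for I/O wait. Names here are made up for the example.

```python
# Sequential vs concurrent handling of the same set of fake queries.
import asyncio
import time

async def handle_query(i: int) -> int:
    await asyncio.sleep(0.05)  # stand-in for waiting on I/O
    return i

async def sequential(n: int) -> float:
    start = time.perf_counter()
    for i in range(n):          # each query waits for the previous one
        await handle_query(i)
    return time.perf_counter() - start

async def concurrent(n: int) -> float:
    start = time.perf_counter()
    # all queries wait at the same time, so total wait is roughly one sleep
    await asyncio.gather(*(handle_query(i) for i in range(n)))
    return time.perf_counter() - start

seq_time = asyncio.run(sequential(5))   # roughly 5 x 0.05s
con_time = asyncio.run(concurrent(5))   # roughly 0.05s total
```

The sequential version pays the full wait for every query; the concurrent one overlaps the waits, which is exactly the "20 queries at once" scenario above.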
haven't even started learning that stuff yet
ah, well, good news is there's always plenty to learn 😄
yeah I get the description
@isaackogan I like to store the Llama Index documents separately from the vectordb so that I can easily make changes at the document level and then either rebuild the vectordb or move to a new platform. I just like having access to the docs outside of the complexity of the vectordb
That makes sense
We have an external doc store that refreshes our index, so we technically do what you’re doing, just separated out