
@Logan M I'm using vector store index

I'm using a vector store index with hybrid search through Qdrant, and the process runs on Google Cloud Run. Rather than loading the ONNX model into memory and re-downloading it every time the instance boots, I want to save the model to a volume I've mounted on the Cloud Run instance. Is there a way to save the sparse embedding generator to the volume mount (and have it load the model from there)?
4 comments
I think you'd have to customize the function that loads the sparse model so that it looks in the location where you've saved the model.
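For reference, a custom loader along those lines could wrap fastembed directly and return the `(indices, values)` pairs that `QdrantVectorStore` expects from its sparse functions. This is only a sketch; the mount path and model name below are assumptions, not values from the thread:

```python
from typing import List, Tuple

from fastembed import SparseTextEmbedding

# Load the sparse model from the mounted volume instead of the default
# cache location. /mnt/models is an assumed Cloud Run mount path.
model = SparseTextEmbedding(
    model_name="prithvida/Splade_PP_en_v1",  # assumed default SPLADE model
    cache_dir="/mnt/models/fastembed_cache",
)

def custom_sparse_fn(texts: List[str]) -> Tuple[List[List[int]], List[List[float]]]:
    # QdrantVectorStore expects parallel lists of token indices and weights,
    # one pair per input text.
    embeddings = list(model.embed(texts))
    return (
        [emb.indices.tolist() for emb in embeddings],
        [emb.values.tolist() for emb in embeddings],
    )
```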
Seems like you could just specify the cache_dir:
```python
from llama_index.vector_stores.qdrant import QdrantVectorStore
from llama_index.vector_stores.qdrant.utils import fastembed_sparse_encoder

sparse_fn = fastembed_sparse_encoder(cache_dir="./cache")

vector_store = QdrantVectorStore(..., sparse_doc_fn=sparse_fn, sparse_query_fn=sparse_fn)
```
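Pointing `cache_dir` at the Cloud Run volume mount should then make the model download once and persist across restarts. A fuller sketch of the hybrid setup described in the question (the mount path, collection name, and Qdrant URL here are placeholders, not values from the thread):

```python
from qdrant_client import QdrantClient
from llama_index.vector_stores.qdrant import QdrantVectorStore
from llama_index.vector_stores.qdrant.utils import fastembed_sparse_encoder

# First boot downloads the ONNX model into the volume; subsequent boots
# load it from disk instead of re-downloading.
sparse_fn = fastembed_sparse_encoder(cache_dir="/mnt/models/fastembed_cache")

client = QdrantClient(url="http://localhost:6333")  # placeholder endpoint

vector_store = QdrantVectorStore(
    collection_name="my_collection",  # placeholder collection name
    client=client,
    enable_hybrid=True,
    sparse_doc_fn=sparse_fn,
    sparse_query_fn=sparse_fn,
)
```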