I'm using a vector store index with hybrid search backed by Qdrant, and the process runs on Google Cloud Run. Rather than loading the ONNX model into memory and re-downloading it every time an instance boots, I'd like to save the model to a volume I've mounted on the Cloud Run instance. Is there a way to save the sparse embedding generator's model to the volume mount, and have it load the model from there on startup?
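To clarify what I'm after, here's a rough sketch of what I have in mind. I'm assuming the sparse model comes from fastembed under the hood and that its classes take a `cache_dir` argument (and honor a `FASTEMBED_CACHE_PATH` env var); the `/mnt/model-cache` path and the model name are just placeholders for my setup:

```python
import os
from pathlib import Path

# Where I mounted the Cloud Run volume (placeholder path, assumption).
CACHE_DIR = os.environ.get("FASTEMBED_CACHE_PATH", "/mnt/model-cache")


def ensure_cache_dir(path: str) -> str:
    """Create the cache directory on the volume if it doesn't exist yet."""
    Path(path).mkdir(parents=True, exist_ok=True)
    return path


def make_sparse_model(cache_dir: str = CACHE_DIR):
    """Load the sparse embedding model, downloading the ONNX weights into
    the mounted volume only on the first boot; later boots reuse them."""
    from fastembed import SparseTextEmbedding  # lazy import: heavy dependency

    return SparseTextEmbedding(
        model_name="prithivida/Splade_PP_en_v1",  # assuming this is the default
        cache_dir=ensure_cache_dir(cache_dir),
    )
```

Is something like this supported, or is there a recommended way to point the sparse embedding generator at a persistent cache directory?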