
Updated 6 months ago

is there a place to manipulate the cache settings to prevent llamaindex from checking "upstream" to find a newer embedding model version?

At a glance

The post asks whether there is a way to prevent llamaindex from checking "upstream" for a newer version of the embedding model. The community members suggest passing the full path to the model weights, which skips the check; checking upstream appears to be the default Hugging Face behavior when only a model name/ID is provided. To get a local model directory, they recommend cloning the repository from Hugging Face.

is there a place to manipulate the cache settings to prevent llamaindex from checking "upstream" to find a newer embedding model version?
5 comments
if you pass in the full path to the model weights, it won't check upstream (it seems to be the default huggingface behaviour if you just give the model name/ID, haven't dug too much into that)
@Logan M how do you do that for the embedding model?
yessir

Plain Text
from llama_index.embeddings import HuggingFaceEmbedding

# A local directory path instead of a model ID skips the upstream version check
embed_model = HuggingFaceEmbedding(model_name="<path/to/model/folder>")
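Relatedly, if you want a hard guarantee that nothing phones home, huggingface_hub and transformers both honour offline environment variables (this is standard Hugging Face behaviour, set before the model is loaded):

Plain Text
import os

# Optional belt-and-braces: with these flags set, huggingface_hub and
# transformers never contact the Hub, even when given a bare model ID.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"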
To get the model dir, the easiest way is to clone the repo from huggingface

For example, click "Clone repository" here for instructions:

[Attachment: image.png, a screenshot of the "Clone repository" option on the Hugging Face model page]
Basically it's just

Plain Text
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/BAAI/bge-large-en
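Putting the two steps together, here is a minimal sketch; the ./bge-large-en path and the test sentence are illustrative:

Plain Text
from llama_index.embeddings import HuggingFaceEmbedding

# Assumes the clone above landed in ./bge-large-en
embed_model = HuggingFaceEmbedding(model_name="./bge-large-en")

# Sanity check: loads and embeds entirely from the local files
vector = embed_model.get_text_embedding("hello world")
print(len(vector))  # bge-large-en produces 1024-dimensional vectors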