is there a place to manipulate the cache

Is there a place to manipulate the cache settings to prevent LlamaIndex from checking "upstream" to find a newer embedding model version?
If you pass in the full path to the model weights, it won't check upstream (checking upstream seems to be the default Hugging Face behaviour when you just give the model name/ID; I haven't dug too much into that).
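If you'd rather enforce this globally instead of per-model, here is a minimal sketch, assuming the standard Hugging Face offline switches apply to your stack (HF_HUB_OFFLINE and TRANSFORMERS_OFFLINE are huggingface_hub/transformers environment variables, not LlamaIndex settings):

Plain Text
import os

# Standard Hugging Face offline switches: serve models from the local cache
# and never contact the hub to look for newer revisions. Set these early,
# before any models are loaded.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"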
@Logan M how do you do that for the embedding model?
yessir

Plain Text
from llama_index.embeddings import HuggingFaceEmbedding
# a local folder path (instead of a hub model ID) means nothing is checked upstream
embed_model = HuggingFaceEmbedding(model_name="<path/to/model/folder>", ...)
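(Note: the exact import path depends on your llama_index version; on newer releases it may be `from llama_index.embeddings.huggingface import HuggingFaceEmbedding`, installed via the separate llama-index-embeddings-huggingface package.)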
To get the model dir, the easiest way is to clone the repo from huggingface

For example, click "Clone repository" on the model page for instructions.

[Attachment: image.png — screenshot of a Hugging Face model page showing the "Clone repository" option]
Basically it's just

Plain Text
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/BAAI/bge-large-en
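Putting the two steps together, a minimal end-to-end sketch (the ./bge-large-en path and the test sentence are illustrative; get_text_embedding is LlamaIndex's standard embedding call):

Plain Text
from llama_index.embeddings import HuggingFaceEmbedding

# Point model_name at the cloned folder; with a local path, nothing is
# fetched from (or version-checked against) the Hugging Face hub
embed_model = HuggingFaceEmbedding(model_name="./bge-large-en")

# Quick smoke test: embed one sentence and check the vector size
vector = embed_model.get_text_embedding("hello world")
print(len(vector))  # bge-large-en produces 1024-dimensional vectors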