

QQ: I turned on logging when using a Hugging Face embedding, and I see it always connects to Hugging Face even when I already have the model cached. It looks like it grabs tokenizer_config.json and config.json each time, even though I see those files in the cache folder. Any way to tell it to stop?

Python
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
# loads BAAI/bge-small-en
# embed_model = HuggingFaceEmbedding()
# loads BAAI/bge-small-en-v1.5
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
8 comments
It's basically just checking whether anything changed. AFAIK the only way to get it to stop doing that is to pass the actual local model path as model_name, instead of just the model ID.
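Something like this sketch, assuming the model is already in the local Hugging Face cache (the snapshot path below is hypothetical, check your own cache directory):

Python
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Point model_name at the local snapshot directory instead of the hub ID.
# The path is illustrative; cached snapshots live under something like
# ~/.cache/huggingface/hub/models--BAAI--bge-small-en-v1.5/snapshots/<revision>/
embed_model = HuggingFaceEmbedding(
    model_name="/home/me/.cache/huggingface/hub/models--BAAI--bge-small-en-v1.5/snapshots/abc123"
)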
Maybe there's some Hugging Face env var that stops that, not totally sure.
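There are, actually: huggingface_hub honors HF_HUB_OFFLINE=1 and transformers honors TRANSFORMERS_OFFLINE=1, which make them serve files from the cache without contacting the Hub. A minimal sketch (the variables must be set before the libraries are imported):

Python
import os

# Set these before transformers / huggingface_hub get imported.
os.environ["HF_HUB_OFFLINE"] = "1"        # huggingface_hub: no network calls
os.environ["TRANSFORMERS_OFFLINE"] = "1"  # transformers: cached files only

from llama_index.embeddings.huggingface import HuggingFaceEmbedding

embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")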
One more if you have a sec...
https://docs.llamaindex.ai/en/stable/examples/vector_stores/postgres/#improving-hybrid-search-with-queryfusionretriever: the section above that one talks about setting hybrid_search=True... but for QueryFusionRetriever it doesn't matter, right?
Oh, it's specifically talking about the hybrid mode built into Postgres
(which isn't very good imo lol)
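For reference, a rough sketch of the QueryFusionRetriever approach: the fusion happens in the retriever itself, so the underlying vector store needs no hybrid flag. This pairs a plain vector retriever with a BM25 retriever (assumes the llama-index-retrievers-bm25 package; the data path is illustrative):

Python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.retrievers import QueryFusionRetriever
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.retrievers.bm25 import BM25Retriever

# Reuse the embedding model from above; any vector store works here,
# since fusion happens in the retriever rather than in the store.
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

documents = SimpleDirectoryReader("./data").load_data()  # illustrative path
index = VectorStoreIndex.from_documents(documents)

vector_retriever = index.as_retriever(similarity_top_k=5)
bm25_retriever = BM25Retriever.from_defaults(
    docstore=index.docstore, similarity_top_k=5
)

retriever = QueryFusionRetriever(
    [vector_retriever, bm25_retriever],
    similarity_top_k=5,
    num_queries=1,  # 1 = skip LLM query generation, just fuse the two result sets
    mode="reciprocal_rerank",
)

nodes = retriever.retrieve("example query")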