from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama

Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-base-en-v1.5")
Settings.llm = Ollama(model="llama3", request_timeout=360.0)
Hugging Face models are cached in ~/.cache/huggingface/hub by default. If you want to use a different local directory, set the HUGGINGFACE_HUB_CACHE environment variable. The same idea applies to ollama pull: running

OLLAMA_MODELS="/home/user/models" ollama pull llama3

stores the model in /home/user/models instead of the default location.
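The Hugging Face cache override can also be done from inside Python, as long as the variable is set before the embedding model is constructed, since huggingface_hub reads it at download time. A minimal sketch; /data/hf-cache is a hypothetical path, use any writable directory:

```python
import os

# Must run before HuggingFaceEmbedding(...) triggers any download;
# "/data/hf-cache" is a hypothetical path for illustration.
os.environ["HUGGINGFACE_HUB_CACHE"] = "/data/hf-cache"

print(os.environ["HUGGINGFACE_HUB_CACHE"])
```

Setting the variable in the shell before launching Python works just as well; doing it in code is only safe if nothing has touched the Hub cache yet.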