I want to build a RAG pipeline with LlamaIndex and Llama 3.1, but I don't want to install and download the model through Ollama every time. Is there a way to download Llama 3.1 once and then load it for use with the LlamaIndex framework?
1 comment
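One way to do this (a minimal sketch, not an official recipe): LlamaIndex's HuggingFaceLLM wrapper passes model_name and tokenizer_name through to transformers, which accepts a local directory path. So a checkpoint downloaded once can be reloaded offline. The path below is hypothetical; point it at wherever the weights actually live. Note that the Meta Llama repos on Hugging Face are gated.

# Minimal sketch: load a previously downloaded Llama 3.1 checkpoint from a
# local directory instead of pulling it through Ollama each time.
# "/models/Meta-Llama-3.1-8B-Instruct" is a hypothetical path.
from llama_index.llms.huggingface import HuggingFaceLLM

llm = HuggingFaceLLM(
    model_name="/models/Meta-Llama-3.1-8B-Instruct",      # local weights
    tokenizer_name="/models/Meta-Llama-3.1-8B-Instruct",  # local tokenizer
    context_window=8192,
    max_new_tokens=512,
    device_map="auto",  # requires the accelerate package
)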
Does anyone know how to use Llama 3 as the LLM in LlamaIndex without Ollama? Can we use Hugging Face instead? I don't want to use Ollama because I don't know where it saves the model or how to change the save directory.
4 comments
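For reference, Ollama stores its models under ~/.ollama/models by default, which can be moved with the OLLAMA_MODELS environment variable. With Hugging Face you control the location directly. A minimal sketch, assuming you have a Hugging Face token with access to the gated Llama 3 repo; "./models/llama-3-8b-instruct" is a hypothetical target directory:

# Minimal sketch: fetch the weights once with huggingface_hub into a
# directory you choose, then point LlamaIndex's HuggingFaceLLM at that path.
from huggingface_hub import snapshot_download
from llama_index.llms.huggingface import HuggingFaceLLM

local_dir = snapshot_download(
    repo_id="meta-llama/Meta-Llama-3-8B-Instruct",
    local_dir="./models/llama-3-8b-instruct",  # downloaded here, once
)

llm = HuggingFaceLLM(model_name=local_dir, tokenizer_name=local_dir)

After the first run, snapshot_download finds the files already in place and nothing is fetched again.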
To have a RAG, do we need to run the lines below every time?

Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-base-en-v1.5")
Settings.llm = Ollama(model="llama3", request_timeout=360.0)

Can't we just load Llama 3 and the embedding model offline from a local directory?
6 comments
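Settings is process-level configuration, so these assignments do run once per Python session, but that is cheap: nothing is re-downloaded if model_name points at a local directory (or the Hugging Face cache is already populated). A minimal sketch with hypothetical local paths, assuming both models were downloaded ahead of time:

# Minimal sketch: both wrappers accept local directory paths, so after a
# one-time download the whole pipeline runs offline.
from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.huggingface import HuggingFaceLLM

Settings.embed_model = HuggingFaceEmbedding(
    model_name="./models/bge-base-en-v1.5",        # local copy of BAAI/bge-base-en-v1.5
)
Settings.llm = HuggingFaceLLM(
    model_name="./models/llama-3-8b-instruct",     # local Llama 3 weights
    tokenizer_name="./models/llama-3-8b-instruct",
)

These two lines replace the Ollama-based setup entirely, so no Ollama server needs to be running.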