Find answers from the community

SumitPandit
Offline, last seen 3 months ago
Joined September 25, 2024
Hi everyone, I am testing a local LLM with Ollama, using the phi3:medium / phi3.5 models. Now I want to restrict the model to respond only from my local database. How should I achieve this?
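A common way to do this is retrieval-augmented generation with a restrictive QA prompt: index the local data, retrieve the relevant chunks, and instruct the model to answer only from that context. A minimal sketch with LlamaIndex and Ollama follows; the ./data directory, the nomic-embed-text embedding model, and the exact prompt wording are assumptions, and a prompt alone is a soft constraint rather than a hard guarantee.
Python
from llama_index.core import (
    VectorStoreIndex, SimpleDirectoryReader, Settings, PromptTemplate,
)
from llama_index.llms.ollama import Ollama
from llama_index.embeddings.ollama import OllamaEmbedding

# Local LLM and local embeddings, both served by Ollama.
Settings.llm = Ollama(model="phi3:medium", request_timeout=300.0)
Settings.embed_model = OllamaEmbedding(model_name="nomic-embed-text")

# Index the local data (swap ./data for your own source).
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# QA prompt that forbids answers outside the retrieved context.
qa_prompt = PromptTemplate(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Answer the question using ONLY the context above. If the answer is not "
    "in the context, reply: 'I don't know based on the provided documents.'\n"
    "Question: {query_str}\n"
    "Answer: "
)

query_engine = index.as_query_engine(
    text_qa_template=qa_prompt,
    similarity_top_k=3,
)
print(query_engine.query("What does the local data say about ...?"))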
7 comments
Hi everyone, is there an example available of how to use Mozilla/Meta-Llama-3.1-8B-Instruct-llamafile (https://huggingface.co/Mozilla/Meta-Llama-3.1-8B-Instruct-llamafile), the Llamafile Instruct model? I see there is some prompt guidance, but how do I use the model in LlamaIndex with the Llamafile integration? Is there an example available?
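In case it helps while waiting for an answer: the usual pattern is to run the downloaded .llamafile as a local server and point LlamaIndex's llamafile LLM integration at it. A rough sketch, assuming the llama-index-llms-llamafile package is installed; the file name and port are assumptions, and the Instruct prompt formatting is expected to come from the model/server rather than hand-written prompts, but verify against the integration docs.
Python
# 1. In a shell, make the llamafile executable and start it in server mode
#    (the file name below is an assumption -- use the one you downloaded):
#      chmod +x Meta-Llama-3.1-8B-Instruct.Q6_K.llamafile
#      ./Meta-Llama-3.1-8B-Instruct.Q6_K.llamafile --server --nobrowser --port 8080

# 2. Point LlamaIndex at the running llamafile server.
from llama_index.core.llms import ChatMessage
from llama_index.llms.llamafile import Llamafile

llm = Llamafile(base_url="http://localhost:8080", temperature=0, seed=0)

response = llm.chat([
    ChatMessage(role="system", content="You are a concise assistant."),
    ChatMessage(role="user", content="What is a llamafile, in one sentence?"),
])
print(response)
The same llm object can also be assigned to Settings.llm so that index.as_query_engine() and other LlamaIndex components use the llamafile-backed model.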
6 comments
SumitPandit · Non-GPU

I am trying this example https://docs.llamaindex.ai/en/stable/getting_started/starter_example_local/ with Ollama. Querying a single plain-text file takes more than 4 minutes to respond and complete the script.
I am using this text file: https://sherlock-holm.es/stories/plain-text/advs.txt
It looks like this step is taking most of the time:
Python
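# builds the vector index: every document is chunked and embedded before indexing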
index = VectorStoreIndex.from_documents(
    documents,
)

I am using a cloud machine with 32 GB RAM and a 4-core CPU (no GPU).
Is there any way I can speed up the process? (See the sketch below.)
Also, I see the docs still use llama3; it would be great if they were updated to llama3.1.
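For what it's worth, from_documents only chunks and embeds the text, so the slow part is the embedding pass over the whole ~600 KB file on CPU, not the LLM. A sketch of things that typically help, assuming the llama-index-embeddings-huggingface package (the BAAI/bge-small-en-v1.5 model, the batch size, and the ./storage path are assumptions): use a small CPU-friendly embedding model, raise the embedding batch size, and persist the index so it is only built once.
Python
import os

from llama_index.core import (
    VectorStoreIndex,
    SimpleDirectoryReader,
    Settings,
    StorageContext,
    load_index_from_storage,
)
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# A small embedding model with a larger batch size is usually the main win on CPU.
Settings.embed_model = HuggingFaceEmbedding(
    model_name="BAAI/bge-small-en-v1.5",  # assumption: swap for your preferred model
    embed_batch_size=64,
)

PERSIST_DIR = "./storage"
if not os.path.exists(PERSIST_DIR):
    documents = SimpleDirectoryReader("data").load_data()
    index = VectorStoreIndex.from_documents(documents, show_progress=True)
    index.storage_context.persist(persist_dir=PERSIST_DIR)  # build once, reuse later
else:
    storage_context = StorageContext.from_defaults(persist_dir=PERSIST_DIR)
    index = load_index_from_storage(storage_context)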
1 comment