model path

At a glance

The community member wants to use a model from a local directory, the "mistral" model for instance. According to the source they link, the model can be loaded either from a URL or from a local path. They ask whether providing a model path will be recognized so that the model does not have to be downloaded, and how to change the download location from the default "/tmp/llama_index/models/" to a directory of their choice.

In the comments, another community member suggests passing a model_path argument to LlamaCPP, which solves both problems: the model will not be downloaded and it will be loaded from the specified path.

I would like to use a model from a local directory, Mistral for instance. According to this source: https://gpt-index.readthedocs.io/en/latest/examples/llm/llama_2_llama_cpp.html I can use it from a URL or a path. LlamaCPP downloads the model to /tmp/llama_index/models/. If I put in a model path, will it be recognized first so the model doesn't have to be downloaded? Secondly, how do I change the download path from /tmp to a desired directory?

from llama_index.llms import LlamaCPP

llm = LlamaCPP(
    # You can pass in the URL to a GGML model to download it automatically
    model_url=model_url,
    # optionally, you can set the path to a pre-downloaded model instead of model_url
    model_path=None,
)
1 comment
You can pass a model_path argument to LlamaCPP; that will solve both of your problems: the model will not be downloaded and it will be loaded from your path.
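For illustration, here is a minimal sketch of what that call could look like, assuming the import path from the docs version linked above; the local file location is purely hypothetical, so adjust it to wherever you stored the GGUF/GGML file:

from llama_index.llms import LlamaCPP

# Hypothetical location of a pre-downloaded model file
local_model = "/home/user/models/mistral-7b-instruct.Q4_K_M.gguf"

llm = LlamaCPP(
    # With model_path set, the file is loaded directly from disk and
    # nothing is downloaded to /tmp/llama_index/models/
    model_path=local_model,
    temperature=0.1,
    max_new_tokens=256,
)

This also addresses the second question indirectly: /tmp/llama_index/models/ is only the cache used when downloading via model_url, so the simplest way to control the location is to place the file yourself and point model_path at it.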