
Model path

At a glance

The post discusses automating model loading: the community member wants to use the local model path if it exists, and fall back to the model URL if it doesn't. The comments suggest checking the path manually first, and explain how to change the cache directory. There is also a discussion of llama-index's compatibility with open-source LLMs: the community members explain that the framework name has no relation to the models it uses, and that OpenAI was simply the default from the start. Finally, they discuss fine-tuning options, noting that the fine-tuning module in the documentation is OpenAI-specific and suggesting Gradient AI as an alternative.

Yes, I know the 'model_path'. I wanted to automate the process: if the model exists, use the model path; if not, the model_url.
Just check the path manually first, I think, to decide whether to pass in the path or the URL.
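A minimal sketch of that fallback, assuming the LlamaCPP wrapper (the model filename, URL, and import path are placeholders and vary by llama-index version):

```python
import os

from llama_index.llms import LlamaCPP  # import path differs in newer versions

# Hypothetical locations; substitute your own model file and download URL.
MODEL_PATH = "/tmp/llama_index/models/llama-2-13b-chat.gguf"
MODEL_URL = "https://example.com/llama-2-13b-chat.gguf"

# Prefer the already-downloaded file; otherwise let LlamaCPP fetch the URL.
if os.path.exists(MODEL_PATH):
    llm = LlamaCPP(model_path=MODEL_PATH)
else:
    llm = LlamaCPP(model_url=MODEL_URL)
```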
It should be by default, no?
Attachment: image.png
Oh I see, we're throwing an error.
Yeah, you can follow along with this.
OK, I can check /tmp/llama_index/models, but I wanted to save it in my own directory.
You can change the cache directory by setting $LLAMA_INDEX_CACHE_DIR in your environment variables
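For example (the directory is hypothetical; set the variable before llama-index downloads any models):

```python
import os

# Redirect llama-index's model download cache away from the
# default /tmp/llama_index to a directory of your choosing.
os.environ["LLAMA_INDEX_CACHE_DIR"] = "/home/me/llama_models"  # hypothetical path
```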
Will it work with open-source LLM models: https://gpt-index.readthedocs.io/en/latest/examples/finetuning/knowledge/finetune_retrieval_aug.html?
I don't have any information about llama-index's history or its connections with OpenAI, but it doesn't make sense to me that something called 'llama' wouldn't be compatible with other open-source LLMs like Llama 2 or similar models.

The same applies to the following code snippet: agent = create_pandas_dataframe_agent(OpenAI(temperature=0, model_name='gpt-4'), [df], handle_parsing_errors=True, verbose=True)
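For reference, that snippet is LangChain's pandas DataFrame agent, which is agnostic about which LLM it wraps. A self-contained sketch might look like this (import paths vary across LangChain versions; data.csv is a hypothetical file):

```python
import pandas as pd
from langchain.llms import OpenAI
from langchain_experimental.agents import create_pandas_dataframe_agent

df = pd.read_csv("data.csv")  # hypothetical input data

# The LLM argument is swappable: any LangChain-compatible LLM
# (e.g. a local Llama 2 via a LlamaCpp wrapper) could replace OpenAI here.
agent = create_pandas_dataframe_agent(
    OpenAI(temperature=0, model_name="gpt-4"),
    [df],
    handle_parsing_errors=True,
    verbose=True,
)
agent.run("How many rows are in the dataframe?")
```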
I've noticed confusion on GitHub, and someone else asked me similar questions about evaluators. While evaluators work with other LLMs, I'd like to clarify why OpenAI is used in the examples. We wasted a lot of time trying to determine if it would work with other LLMs.
llama-index arrived a few weeks/days before the Meta Llama models 😅

the framework name itself has no relation to the models which the framework uses
So OpenAI was always the default, since it came first and provides an API anyone can use.
Fun fact: before llama-index, the name was gpt-index.
Everyone wanted to spit like a llama 🙂 Thanks for the explanation. And what do you think about fine-tuning with RAG (the link above)? Will it work with Llama 2, Mistral, or some other open-source model?
The concepts are still the same; unfortunately, you will need to find another way to fine-tune your llama, since the module in that doc is only for OpenAI.