Updated 9 months ago

Hi people! I was wondering if anyone has tried fine-tuning their own model to distill GPT-4? I followed the documentation and fine-tuned it with the finetune engine: ft_llm = finetune_engine.get_finetuned_model(temperature=0.3). Now, this may sound very silly, but I have no idea how to keep (persist and reload) the fine-tuned LLM, or how to keep the fine-tuned ReAct agent. I went through the documentation multiple times and could not find how to do it. Appreciate any help!
2 comments
Can you share which doc you used?
The doc is from https://docs.llamaindex.ai/en/stable/optimizing/fine-tuning/fine-tuning.html. I just found out that I'm supposed to use:
from llama_index.llms import OpenAI
finetuned_model = OpenAI(model="your_finetuned_model_id", temperature=0.3)
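To expand on that answer: with OpenAI fine-tuning, the fine-tuned model lives on OpenAI's servers, so the only thing to "keep" locally is its model id string; the agent itself is just rebuilt around the LLM each session. A minimal sketch, assuming the OpenAI fine-tuning flow from the linked doc (the model id is a placeholder from the thread, and the tools list is hypothetical):

```python
# Sketch: reloading a fine-tuned LLM and rebuilding a ReAct agent in a new
# session. The fine-tuned weights stay on OpenAI's side; you only persist
# the model id string (e.g. in a config file or env var).
from llama_index.llms import OpenAI
from llama_index.agent import ReActAgent

# Placeholder from the thread; after fine-tuning completes, OpenAI returns
# an id of the form "ft:gpt-3.5-turbo-...".
FINETUNED_MODEL_ID = "your_finetuned_model_id"

# Recreate the fine-tuned LLM in any later session:
ft_llm = OpenAI(model=FINETUNED_MODEL_ID, temperature=0.3)

# Rebuild the ReAct agent around it. `tools` is whatever tool list you
# originally gave the agent (hypothetical here):
# agent = ReActAgent.from_tools(tools, llm=ft_llm, verbose=True)
# response = agent.chat("your question")
```

In other words, there is no separate "save the fine-tuned agent" step: persist the model id, then reconstruct the LLM and the agent from it on startup.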