
Finetuning

Hey everyone, does anybody know if fine-tuning + RAG makes the performance even better? For context, we are fine-tuning on QA pairs from professor-student exchanges. I don't know whether using LlamaIndex to retrieve a similar question and fine-tuning the model to answer like the professor would make performance better or worse. Any insight into this would be much appreciated.
1 comment
It will really depend on the dataset you use to fine-tune the model.

If it is good, covers a lot of ground, and the base model responds well to fine-tuning, then it should not be a problem, I guess.
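As a minimal sketch of what that dataset might look like: if the professor-student pairs are already collected, they can be written out in the chat-format JSONL that OpenAI's fine-tuning endpoint expects. The `qa_pairs` list, field names, and system prompt below are placeholders for whatever the real data looks like.

```python
import json

# Hypothetical (student_question, professor_answer) pairs; replace with
# however the actual dataset is loaded.
qa_pairs = [
    ("What is overfitting?", "Overfitting is when a model memorizes noise ..."),
    ("Why do we regularize?", "Regularization penalizes complexity so that ..."),
]

# OpenAI chat fine-tuning expects one JSON object per line, each with a
# "messages" list of role/content dicts.
with open("professor_qa.jsonl", "w") as f:
    for question, answer in qa_pairs:
        record = {
            "messages": [
                {"role": "system", "content": "Answer as the professor would."},
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]
        }
        f.write(json.dumps(record) + "\n")
```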

https://docs.llamaindex.ai/en/stable/examples/finetuning/openai_fine_tuning.html
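A rough sketch of the combined fine-tuning + RAG setup, loosely following the linked notebook: fine-tune on the QA pairs, then use the fine-tuned model as the LLM behind a LlamaIndex query engine so retrieval supplies similar past questions as context. The import paths, the `OpenAIFinetuneEngine` arguments, and the `as_query_engine(llm=...)` parameter are assumptions about a recent llama-index release; check the docs for the exact API in your version.

```python
# Assumes a recent llama-index layout (llama-index-core + llama-index-finetuning);
# older versions use different import paths. Requires OPENAI_API_KEY to be set.
from llama_index.core import VectorStoreIndex, Document
from llama_index.finetuning import OpenAIFinetuneEngine

# 1. Fine-tune gpt-3.5-turbo on the JSONL written in the earlier sketch
#    (this kicks off an OpenAI fine-tuning job and can take a while).
finetune_engine = OpenAIFinetuneEngine("gpt-3.5-turbo", "professor_qa.jsonl")
finetune_engine.finetune()
ft_llm = finetune_engine.get_finetuned_model(temperature=0.3)

# 2. Build a vector index over the same QA pairs (qa_pairs from the earlier
#    sketch) so similar past questions can be retrieved at query time.
documents = [Document(text=f"Q: {q}\nA: {a}") for q, a in qa_pairs]
index = VectorStoreIndex.from_documents(documents)

# 3. RAG + fine-tuned model: retrieved QA pairs go into the prompt, and the
#    fine-tuned model generates the professor-style answer.
query_engine = index.as_query_engine(llm=ft_llm, similarity_top_k=2)
response = query_engine.query("How should I think about bias vs. variance?")
print(response)
```

Whether this beats either technique alone is an empirical question; the usual advice is to evaluate the fine-tuned model, the RAG-only setup, and the combination on the same held-out professor-student questions.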