Hey everyone, does anybody know whether combining fine-tuning with RAG improves performance further? For context, we're fine-tuning on professor-student QA pairs. I'm not sure whether using LlamaIndex to retrieve a similar question and then having the fine-tuned model answer like the professor would make performance better or worse. Any insight into this would be much appreciated.