A community member asks why the technology called RAG is used when fine-tuning exists. A comment from another community member explains that fine-tuning is inefficient for question-answering use cases: building the training datasets takes substantial work, and the model retains the injected knowledge poorly. The comment suggests that RAG can augment fine-tuning, whereas fine-tuning alone does not yield good results, and concludes that RAG is more accurate, cheaper, faster, and more flexible than fine-tuning alone.
Hi everyone, a question came up while I was listening to a lecture. The lecture is about fine-tuning, but why is a technology called RAG used when fine-tuning exists?
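To make the comparison in the comment concrete, here is a minimal sketch of the RAG pattern: instead of baking knowledge into the weights via fine-tuning, relevant documents are retrieved at query time and prepended to the prompt. The toy corpus, the word-overlap retriever, and the `build_prompt` helper below are illustrative assumptions, not anything from the original discussion; a real system would use an embedding-based vector store and an LLM API call.

```python
# Minimal RAG sketch (assumed example): retrieve relevant text for a question,
# then hand it to the model as prompt context instead of fine-tuning it in.
from typing import List

# Stand-in knowledge base; in practice this would be a vector store,
# which can be updated far more cheaply than re-running a fine-tuning job.
DOCUMENTS: List[str] = [
    "Fine-tuning adjusts model weights on a curated training dataset.",
    "RAG retrieves relevant documents at query time and adds them to the prompt.",
    "Updating a RAG index is cheaper and faster than a new fine-tuning run.",
]

def retrieve(question: str, k: int = 2) -> List[str]:
    """Rank documents by naive word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(
        DOCUMENTS,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Prepend retrieved context so the model answers from current knowledge."""
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

if __name__ == "__main__":
    # In a real system, this prompt would be sent to an LLM.
    print(build_prompt("Why use RAG when fine-tuning exists?"))
```

This is also why the two techniques combine well: fine-tuning shapes how the model behaves, while the retrieval step supplies up-to-date facts the weights were never trained on.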