
Hi, everyone

At a glance

The community member asks why the technology called RAG is used when fine-tuning exists. Another community member explains that fine-tuning is inefficient for question-answering use cases because it requires a lot of work to create datasets and has poor knowledge retention. Fine-tuning can be used to augment RAG, but fine-tuning alone does not yield good results. The comment concludes that RAG is more accurate, cheaper, faster, and more flexible than fine-tuning alone.

Hi, everyone
While listening to the lecture, a question came up. The lecture is currently about fine-tuning, but why is a technology called RAG used when fine-tuning exists?
1 comment
Fine-tuning is inefficient for Q/A use cases because it requires a lot of work to create datasets, and the knowledge retention is very poor.
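
To give a sense of that dataset work, here is a minimal sketch of the kind of supervised examples fine-tuning needs. The file name, the example facts, and the chat-style JSONL layout are just assumptions for illustration; the point is that every fact you want the model to retain has to be hand-written as a training pair like this.

```python
import json

# Hypothetical examples: each fact the model should "know" becomes a
# supervised question/answer pair before fine-tuning can even start.
examples = [
    {"messages": [
        {"role": "user", "content": "What is our refund window?"},
        {"role": "assistant", "content": "Refunds are accepted within 30 days of purchase."},
    ]},
    {"messages": [
        {"role": "user", "content": "Which regions do we ship to?"},
        {"role": "assistant", "content": "We currently ship to the US, EU, and Japan."},
    ]},
]

# Write one JSON object per line (JSONL), a common fine-tuning input format.
with open("finetune_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```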

Fine-tuning can be used to augment RAG, but fine-tuning by itself doesn't yield good results.

RAG is more accurate, cheaper, faster, and more flexible.
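
For contrast, here is a minimal retrieve-then-generate sketch of the RAG idea. The keyword-overlap retriever and the document snippets are placeholders, not any specific library's API; a real system would use vector embeddings for retrieval and send the final prompt to an LLM.

```python
import re

# Knowledge lives in documents, not in the model's weights.
documents = [
    "Refunds are accepted within 30 days of purchase.",
    "We currently ship to the US, EU, and Japan.",
    "Support is available Monday through Friday, 9am to 5pm.",
]

def tokens(text: str) -> set[str]:
    # Lowercased word set, stripped of punctuation.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, k: int = 1) -> list[str]:
    # Rank documents by word overlap with the question (stand-in for
    # embedding similarity) and return the top k.
    q = tokens(question)
    ranked = sorted(documents, key=lambda d: len(q & tokens(d)), reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    # Ground the model by putting the retrieved text in the prompt;
    # in practice this prompt would be sent to an LLM.
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How many days do I have to get a refund on a purchase?"))
```

Because the knowledge stays in the documents, updating it is just re-indexing the text, with no retraining needed, which is where the cheaper/faster/more flexible claim comes from.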