
Updated 7 months ago

Is it possible to implement RAG by fine-tuning something like a local LLM or GPT?

Or is this inefficient?
7 comments
Depends on how you're implementing it. For most use cases it isn't necessary. It's covered here: https://docs.llamaindex.ai/en/stable/optimizing/fine-tuning/fine-tuning/
Thank you. By the way, have you ever used LangChain?

Actually, I'm going for an interview at a small company next week, and they say they use LangChain. Is there anything I need to know?
There are quite a few similarities. I'd read the composition section, as it covers the concepts: https://python.langchain.com/docs/modules/composition/
Okay, thank you. Are you an AI? You answer like an assistant, haha
I'm not ๐Ÿ˜…
Oh, and the company says they will also use RAG. I know how RAG works, but do I need to know more details?

For example, how does the assistant respond when you send a query? In other words, what algorithm does the assistant use to answer?
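At a high level, most RAG assistants answer a query by retrieving the most relevant document chunks and injecting them into the LLM prompt before generating. Here is a minimal, hedged sketch of that retrieve-then-generate flow; the function names are illustrative, retrieval is toy bag-of-words cosine similarity (real systems use a neural embedding model and a vector store), and the "generation" step is just the prompt assembly that would be sent to an LLM:

```python
# Sketch of the retrieve-then-generate flow behind a typical RAG assistant.
# Assumptions: toy word-count "embeddings" instead of a neural encoder,
# and a stubbed prompt build instead of an actual LLM call.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank all documents by similarity to the query, keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context_docs: list[str]) -> str:
    # The retrieved chunks are injected into the prompt the LLM receives.
    context = "\n".join(f"- {d}" for d in context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "RAG retrieves relevant documents before generating an answer.",
    "Fine-tuning updates model weights on new training data.",
    "LangChain composes prompts, models, and retrievers into chains.",
]
query = "How does RAG answer a query?"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)
```

In a real deployment the only substantive changes are swapping `embed` for an embedding model, storing the vectors in a vector database, and sending `prompt` to an LLM; the query-time algorithm is still embed, search, stuff the context, generate.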
Sorry, I'm currently a front-end developer, and I've been trying a lot of things lately because I want to change jobs. That's why I'm so nervous and worried, and why I'm asking these questions. Sorry!