
Updated 7 months ago

Statement

At a glance

The post discusses open-source tools related to Retrieval-Augmented Generation (RAG), such as LlamaIndex, LangChain, and Haystack. The community member quotes a claim (from the RAG Foundry paper) that while these tools are well known for composing RAG pipelines, they are not focused on evaluation and their training capability is underdeveloped.

In the comments, another community member agrees on the fine-tuning point, but notes that all three tools integrate with many evaluation frameworks.

Useful resources
https://github.com/IntelLabs/RAGFoundry

"There are numerous open-source tools related to the different aspects of RAG, namely inference, training and evaluation. LlamaIndex (Liu, 2022), LangChain (Chase, 2022) and Haystack (Pietsch et al., 2019) are well known libraries for composing RAG pipelines; however they are not focused on evaluation and their training capability is underdeveloped."

@Logan M @jerryjliu0 what could ya guys say about this statement?
1 comment
I would agree on fine-tuning. However, all three integrate with a lot of eval frameworks
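To make the comment's point about eval frameworks concrete, here is a minimal, library-free sketch of the kind of retrieval metrics those frameworks compute (hit rate and mean reciprocal rank). Everything below — the toy corpus, the keyword-overlap retriever, and the function names — is illustrative and does not show any specific library's API.

```python
def evaluate_retrieval(eval_set, retrieve, k=3):
    """Score a retriever with hit rate @ k and MRR over (query, expected_doc_id) pairs."""
    hits, rr_sum = 0, 0.0
    for query, expected_id in eval_set:
        ranked_ids = retrieve(query)[:k]
        if expected_id in ranked_ids:
            hits += 1
            rr_sum += 1.0 / (ranked_ids.index(expected_id) + 1)
    n = len(eval_set)
    return {"hit_rate": hits / n, "mrr": rr_sum / n}

# Toy corpus and a trivial keyword-overlap retriever (stand-ins for a real index).
corpus = {
    "d1": "llamaindex composes rag pipelines",
    "d2": "langchain chains llm calls together",
    "d3": "haystack builds search pipelines",
}

def retrieve(query):
    # Rank documents by number of shared words with the query.
    scores = {doc_id: len(set(query.split()) & set(text.split()))
              for doc_id, text in corpus.items()}
    return sorted(scores, key=scores.get, reverse=True)

eval_set = [("rag pipelines", "d1"), ("llm calls", "d2"), ("search pipelines", "d3")]
print(evaluate_retrieval(eval_set, retrieve))  # → {'hit_rate': 1.0, 'mrr': 1.0}
```

Real evaluation frameworks apply the same idea to a pipeline's actual retriever, and add generation-side metrics (faithfulness, answer relevance) on top.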