Updated last year

Hello,
Will there be a way to implement RAG with a local LLM (Hugging Face or llama.cpp)?
5 comments
Local LLMs sadly are not anywhere near smart enough, unless maybe you have the resources to run a 70B model.
I see, is it a limitation for agents as well?
Yeah, open-source LLMs need to use the ReAct agent, and in my experience they do not perform well. Each LLM seems to take a lot of prompt engineering, but the prompts for the ReAct agent are not easy to modify (for now).
I see, so for a conversational chatbot, using an agent on a small LLM such as Mistral isn't the best solution, right? I'm struggling to get LangChain agents working with Mistral.
Yeah, I agree. Mistral works great in a query engine, but not so much as an agent making decisions.