Updated 7 months ago

I have built a Q&A RAG application using LlamaIndex that answers questions based on documents in a folder. Currently, it answers every question, even when the question is not related to any of the documents, and in most of those cases it makes up an answer/hallucinates. Any tips to control this behavior? For example, a user simply typed "hello" and the app returned a random answer.
6 comments
There are a few ways: you can implement a similarity threshold to make sure that irrelevant documents won't get returned: https://docs.llamaindex.ai/en/stable/module_guides/querying/node_postprocessors/node_postprocessors/?h=similaritypost#similaritypostprocessor
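Something like this should do it (a rough sketch, assuming a recent llama-index version with the `llama_index.core` imports and a plain VectorStoreIndex over your folder; the "data" path and the 0.7 cutoff are placeholders you'd need to adjust for your setup and embedding model):

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.postprocessor import SimilarityPostprocessor

# Build an index over the documents in your folder ("data" is a placeholder path)
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Drop any retrieved node whose similarity score falls below the cutoff,
# so unrelated queries end up with no context to answer from
query_engine = index.as_query_engine(
    similarity_top_k=5,
    node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.7)],
)

response = query_engine.query("hello")
print(response)  # with a sensible cutoff this should return no sources
```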

Also, which LLM are you using? You can also modify your prompt to help control this
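On the prompt side, one option is to override the default QA prompt so the model is explicitly told to refuse when the context doesn't contain the answer (a sketch using the same index as above; the exact wording of the refusal is up to you):

```python
from llama_index.core import PromptTemplate

# Custom QA prompt that explicitly allows an "I don't know" answer
qa_prompt = PromptTemplate(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Answer the query using ONLY the context above. If the context does not\n"
    "contain the answer, reply with: I don't know based on the provided documents.\n"
    "Query: {query_str}\n"
    "Answer: "
)

query_engine = index.as_query_engine(text_qa_template=qa_prompt)
```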
@Teemu, I am using Llama-2 7B locally. I'll give that a try. What are some other ways to handle this scenario?
@Logan M @WhiteFang_Jr Do you have any inputs on this?
With such a small model, it will be very prone to hallucinating. Not sure how much prompting would improve it, but it's worth a try.

I'd maybe look at the similarity threshold so that those irrelevant documents won't even be included, since they confuse the model
If you set the similarity threshold correctly, saying 'hello' shouldn't be returning any results