Updated 3 months ago

Reducing hallucinations

So after loading in my external data, I found several queries tended to hallucinate a lot. The answers looked impressive, but were largely wrong. Any tips on how to designate certain data as “facts” that should use a more deterministic approach (lookup rather than interpretation), vs. other parts that are more general-language/probabilistic lookup? Is it about using different index types somehow?
1 comment
Yeah, it's kind of a common trait with LLMs. GPT-3.5 will hallucinate the most, though.

There are some settings that can reduce the chances of hallucination (a smaller chunk size, and maybe a higher top-k to pull in more context for the LLM).
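For example, a minimal sketch of tuning both (assuming a llama_index version where ServiceContext and similarity_top_k are available; parameter names vary across releases, and the data path and query are placeholders):

```python
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex

# Smaller chunks keep each retrieved passage tightly scoped to one idea
service_context = ServiceContext.from_defaults(chunk_size=512)

documents = SimpleDirectoryReader("./data").load_data()  # placeholder path
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

# A higher top-k retrieves more chunks, giving the LLM more grounding context
query_engine = index.as_query_engine(similarity_top_k=5)
response = query_engine.query("What does the contract say about termination?")
print(response)
```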

You can also retrieve the source nodes that were used to create the response via response.source_nodes, so you can check the answer against the actual retrieved text.
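For instance (a rough sketch; the exact node attributes depend on the llama_index version, but source nodes generally carry a similarity score and the retrieved text):

```python
# Inspect which chunks were retrieved to produce the response
for source_node in response.source_nodes:
    print(f"score: {source_node.score}")
    print(source_node.node.get_text()[:200])
    print("---")
```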

You can also try to evaluate the responses using some of our evaluation tools:

https://gpt-index.readthedocs.io/en/latest/how_to/evaluation/evaluation.html

https://github.com/jerryjliu/llama_index/tree/main/examples/evaluation
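As a sketch of what that looks like (assuming the older llama_index API covered in those docs, where ResponseEvaluator uses an LLM to judge whether a response is supported by its source nodes):

```python
from llama_index.evaluation import ResponseEvaluator

# Checks whether the response is grounded in the retrieved source nodes
evaluator = ResponseEvaluator(service_context=service_context)
eval_result = evaluator.evaluate(response)
print(eval_result)  # roughly "YES" (supported) or "NO" (likely hallucinated)
```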