Updated 4 months ago

Prevent Hallucinations

At a glance

The community member asks how to make the LLM prioritize the content indexed with LlamaIndex in order to prevent hallucinations. Another community member suggests that this is mainly a prompt engineering issue and recommends modifying the text_qa_template and the refine_template in the query call, providing a link to the relevant documentation. The original poster acknowledges this and says they will try it.

Hello, I have a question: how can I make the LLM prioritize the LlamaIndex index, so I can prevent hallucinations?
2 comments
This is mainly a prompt engineering thing. You'll want to modify the text_qa_template and the refine_template in the query call

https://gpt-index.readthedocs.io/en/latest/how_to/customization/custom_prompts.html
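A minimal sketch of what such custom prompts might look like. The template strings below are illustrative, not LlamaIndex defaults; the key idea is to instruct the model to answer only from the retrieved context and to say so when the context is insufficient. The placeholder names ({context_str}, {query_str}, {existing_answer}, {context_msg}) follow the variables described in the linked docs; the exact Prompt class and query API depend on your LlamaIndex version, so the wiring is shown as a commented-out assumption.

```python
# Sketch of anti-hallucination prompt templates for LlamaIndex's
# text_qa_template and refine_template. The wording is an assumption,
# not the library's default prompts.

# QA template: answer ONLY from the retrieved context, and admit
# when the context does not contain the answer.
TEXT_QA_TEMPLATE = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Using ONLY the context above and no prior knowledge, "
    "answer the question: {query_str}\n"
    "If the context does not contain the answer, reply: "
    "'I don't know based on the provided documents.'\n"
)

# Refine template: refine the existing answer using only the new
# context; do not introduce outside facts.
REFINE_TEMPLATE = (
    "The original question is: {query_str}\n"
    "The existing answer is: {existing_answer}\n"
    "Additional context is below.\n"
    "---------------------\n"
    "{context_msg}\n"
    "---------------------\n"
    "Refine the existing answer using ONLY this new context. "
    "If the new context is not useful, return the existing answer unchanged.\n"
)

# Hypothetical wiring (requires llama_index; commented out so this
# sketch stays self-contained -- check your version's docs for the
# exact Prompt class and query-engine API):
# from llama_index import Prompt
# query_engine = index.as_query_engine(
#     text_qa_template=Prompt(TEXT_QA_TEMPLATE),
#     refine_template=Prompt(REFINE_TEMPLATE),
# )
# response = query_engine.query("What does the report say about Q3 revenue?")

if __name__ == "__main__":
    # Demonstrate that the template fills its placeholders as expected.
    filled = TEXT_QA_TEMPLATE.format(
        context_str="Q3 revenue was $10M.",
        query_str="What was Q3 revenue?",
    )
    print(filled)
```

The "reply: 'I don't know…'" escape hatch is the main lever here: without an explicit permission to refuse, models tend to fill gaps from prior knowledge, which is exactly the hallucination being asked about.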
Okay, I will check and try it, thx