Hi everyone, I am using OpenAI GPT-3.5 as the LLM for my chat engine, and it is answering questions outside of the knowledge base. Is there anything I can do to either enforce a closed domain or stop the hallucination?
I have tried a few prompts, but they are all fairly similar: specifying the role of the system and then ending with "If the question does not make any sense, explain why it does not make sense. Do not generate false information or give hypothetical answers."
Do you know what information is in the index ahead of time? Might be good to add a line like "If the question does not relate to {X}, inform the user that you can't help them with that."
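Something like this might work as a starting point (a minimal sketch assuming an existing LlamaIndex `VectorStoreIndex` called `index`; the `{X}` topic and the exact prompt wording are placeholders to adapt):

```python
# Minimal sketch: constraining the chat engine to the indexed topic via a
# system prompt. Assumes `index` is an already-built LlamaIndex VectorStoreIndex;
# replace {X} with whatever your index actually covers.
chat_engine = index.as_chat_engine(
    chat_mode="context",
    system_prompt=(
        "You are a support assistant that only answers questions about {X}. "
        "Answer strictly from the retrieved context. If the question does not "
        "relate to {X}, inform the user that you can't help them with that. "
        "Do not generate false information or give hypothetical answers."
    ),
)
```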
I also noticed that among the node postprocessors there is one to set a minimum similarity score for the retrieved documents. Is there anything to compare the response against the sources after the fact?
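Rough sketch of both ideas below. The import paths and the 0.7 cutoff are assumptions and will vary by LlamaIndex version, but the general shape is: filter low-similarity nodes at retrieval time, then run a faithfulness check on the response afterwards.

```python
from llama_index.core.postprocessor import SimilarityPostprocessor
from llama_index.core.evaluation import FaithfulnessEvaluator

# Drop retrieved nodes below a minimum similarity score
# (the 0.7 cutoff is a guess; tune it for your data).
query_engine = index.as_query_engine(
    node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.7)],
)

response = query_engine.query("How do I reset my password?")

# After the fact, check whether the answer is actually supported by the
# retrieved source nodes (a basic hallucination check).
evaluator = FaithfulnessEvaluator()
eval_result = evaluator.evaluate_response(response=response)
print(eval_result.passing)
```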