Hi everyone, are there specific guidelines for reducing the out-of-context questions generated by the sub question query engine? I find those encourage hallucinated responses from the LLM. Additionally, is there also a way to reduce hallucinated responses to the generated questions themselves (e.g. by editing the default query template), similar to what's done in RAG-style prompting? Thanks!
For reducing hallucinations in the generated questions, I would maybe either a) write different tool descriptions or b) modify the underlying question generator prompt template
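Roughly what I mean, as a minimal sketch (assuming a recent llama_index release, since import paths shift between versions; `docs_query_engine`, the tool name, and the prompt wording are placeholders):
```python
from llama_index.core.query_engine import SubQuestionQueryEngine
from llama_index.core.question_gen import LLMQuestionGenerator
from llama_index.core.question_gen.prompts import DEFAULT_SUB_QUESTION_PROMPT_TMPL
from llama_index.core.tools import QueryEngineTool, ToolMetadata

# a) Tighter tool descriptions, so the question generator knows what each index
#    can (and cannot) answer. `docs_query_engine` is a stand-in for your own engine.
tools = [
    QueryEngineTool(
        query_engine=docs_query_engine,
        metadata=ToolMetadata(
            name="product_docs",
            description=(
                "Answers questions about the product documentation ONLY. "
                "Not useful for pricing, legal, or general-knowledge questions."
            ),
        ),
    ),
]

# b) A stricter question-generation prompt. Prepending to the default template
#    keeps the JSON output format the parser expects while adding a scoping rule.
question_gen = LLMQuestionGenerator.from_defaults(
    prompt_template_str=(
        "Only generate sub-questions that one of the tools below can answer. "
        "If part of the user question is not covered by any tool, do not generate "
        "a sub-question for it.\n\n" + DEFAULT_SUB_QUESTION_PROMPT_TMPL
    ),
)

query_engine = SubQuestionQueryEngine.from_defaults(
    query_engine_tools=tools,
    question_gen=question_gen,
)
```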
Thanks for the response, Logan! I've given those a shot and still see a fair bit of hallucination in the questions/answers. I'll keep tweaking the various prompt templates to find a sweet spot. It may be that a non-OpenAI LLM struggles a little more with asking good questions.
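Sharing what I'm experimenting with in case it helps anyone: besides the question generator prompt, I'm also tightening the answer-synthesis prompt on the underlying per-tool query engine so the sub-answers stay grounded in the retrieved context. Rough sketch only (the prompt key is the one `get_prompts()` prints on my version, so check yours; `docs_query_engine` is a placeholder):
```python
from llama_index.core import PromptTemplate

# See which prompts the underlying query engine actually exposes
print(list(docs_query_engine.get_prompts().keys()))

# Swap in a stricter QA template so answers come only from retrieved context.
# The key below is what I see for the response synthesizer; verify it against
# the keys printed above on your version.
grounded_qa = PromptTemplate(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Using ONLY the context above and no prior knowledge, answer the question. "
    "If the context does not contain the answer, say it is not available.\n"
    "Question: {query_str}\n"
    "Answer: "
)
docs_query_engine.update_prompts(
    {"response_synthesizer:text_qa_template": grounded_qa}
)
```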