Hi everyone, I was wondering whether there are specific guidelines for reducing the out-of-context questions generated by the sub-question query engine, as I find those encourage hallucinated responses from the LLM. Additionally, is there a way to reduce hallucinated answers to the generated questions (e.g., by editing the default query template), similar to a plain RAG setup? Thanks!
For the responses to the underlying questions, yeah, you could edit the templates for the underlying query engine tools. You can see how to edit query engine templates here: https://gpt-index.readthedocs.io/en/latest/core_modules/model_modules/prompts.html#modules
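For reference, here's a minimal sketch of what that looks like, assuming the legacy llama_index API from around the time of this thread (~0.8.x); the strict "answer only from context" wording and the "data" directory are my own illustrations, not library defaults:

```python
# Minimal sketch (legacy llama_index ~0.8.x API): override the text QA prompt
# so each underlying query engine is nudged to answer strictly from context.
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.prompts import PromptTemplate

# Illustrative template, not the shipped default: {context_str} and
# {query_str} are the placeholders the QA prompt expects.
qa_template = PromptTemplate(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Answer the question using ONLY the context above, with no prior "
    "knowledge. If the context is insufficient, say 'I don't know.'\n"
    "Question: {query_str}\n"
    "Answer: "
)

documents = SimpleDirectoryReader("data").load_data()  # hypothetical data dir
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(text_qa_template=qa_template)
```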

For reducing hallucinations in the generated questions, I would maybe either a) write different tool descriptions or b) modify the underlying question generator prompt template.

Option b is its own can of worms, but if you want to go that route, the SubQuestionQueryEngine accepts a question_gen parameter, and the OpenAIQuestionGenerator has a prompt template that you can modify: https://github.com/jerryjliu/llama_index/blob/5671177d480ce178a278856bc27c785b69ceed57/llama_index/question_gen/openai_generator.py#L56
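Here's a rough sketch of that wiring, again assuming the legacy API; the prompt wording and the "docs" tool are made up for illustration, and the {tools_str}/{query_str} placeholders mirror the default template in the file linked above:

```python
# Rough sketch (legacy llama_index API): plug a custom question-generation
# prompt into the SubQuestionQueryEngine via its question_gen parameter.
from llama_index.query_engine import SubQuestionQueryEngine
from llama_index.question_gen.openai_generator import OpenAIQuestionGenerator
from llama_index.tools import QueryEngineTool, ToolMetadata

# Illustrative prompt: placeholder names follow the default template in
# openai_generator.py linked above; the wording itself is invented here.
custom_prompt = (
    "You have access to the tools below. Generate sub-questions ONLY if a "
    "tool's description clearly covers them; never invent questions about "
    "topics the tools do not mention.\n"
    "Tools:\n{tools_str}\n\n"
    "User question: {query_str}\n"
)

question_gen = OpenAIQuestionGenerator.from_defaults(
    prompt_template_str=custom_prompt
)

# Option a) from above: a narrow, concrete tool description also helps keep
# generated sub-questions in scope. 'docs' is a hypothetical tool name.
tools = [
    QueryEngineTool(
        query_engine=query_engine,  # e.g. the engine from the sketch above
        metadata=ToolMetadata(
            name="docs",
            description="Answers questions strictly about the indexed documents.",
        ),
    )
]

sub_question_engine = SubQuestionQueryEngine.from_defaults(
    query_engine_tools=tools,
    question_gen=question_gen,
)
```

If you're on a non-OpenAI LLM, LLMQuestionGenerator.from_defaults(prompt_template_str=...) should be the analogous hook, though treat that as an assumption for whatever version you're running.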
Thanks for the response, Logan! I've given those a shot and still see a fair bit of hallucination in the questions/answers. I'll probably keep tweaking the various prompt templates to find a sweet spot. It may be that a non-OpenAI LLM struggles a little more with asking good questions.