Initial response: ...
Refine context: (snippet, not full text)
Refined response: Return the original answer. The provided context does not relate to ...
DEBUG:root:> Refined response:
Refined answer: ....
response = response.strip()

However, I'm unsure how to handle newlines if they appear in the middle of the generated responses, which does occasionally happen.
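If the goal is to also collapse newlines that show up inside the response (which strip() won't touch), one option is a small post-processing step. This is just a sketch of one possible approach, not anything provided by the library; the helper name is made up:

import re

def clean_response(response: str) -> str:
    """Strip surrounding whitespace and collapse internal newlines.

    Hypothetical helper: replaces any run of newlines (plus adjacent
    spaces) inside the response with a single space.
    """
    response = response.strip()
    return re.sub(r"\s*\n+\s*", " ", response)

print(clean_response("First part of the answer.\n\nSecond part."))
# -> "First part of the answer. Second part."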
from gpt_index.logger import LlamaLogger
and add it during the index.query call. Currently it only works during create/refine response synthesis (which is what you wanted to see).
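Roughly, the usage looks something like the sketch below. The index class, the data path, and the llama_logger keyword argument are assumptions on my part (the exact wiring has changed across gpt_index versions), so treat this as a sketch rather than the definitive API; the point is that the logger collects entries while index.query runs the create/refine synthesis, and you read them back afterwards:

from gpt_index import GPTSimpleVectorIndex, SimpleDirectoryReader
from gpt_index.logger import LlamaLogger

# Hypothetical wiring: the keyword argument name may differ by version.
llama_logger = LlamaLogger()

documents = SimpleDirectoryReader("data").load_data()
index = GPTSimpleVectorIndex(documents, llama_logger=llama_logger)

# Entries are recorded while the query runs (create/refine synthesis only).
response = index.query("What did the author do growing up?")

# Inspect what was sent to / returned by the LLM during refinement.
for entry in llama_logger.get_logs():
    print(entry)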