For a QA retrieval setup, when I call `query_engine.query()`, how do I get logs or a trace of all the prompts and inputs that LlamaIndex sends to the LLM while generating the answer? I'm using the default response mode, btw. I swear I saw this documented somewhere before, but I can't find it now.