Logan M: For a QA retrieval query engine

For a QA retrieval query_engine.query() call, how do I get a log or trace of all the prompts and inputs LlamaIndex sends to the LLM when generating the answer? I am using the default response mode, by the way.
I swear I saw this somewhere before, but I am unable to find it now.
Sorry. I just checked the documentation. That should do it.

Thanks a ton for the help. I will try it out.
You can also do

Plain Text
import openai
openai.log = "debug"


Or if you aren't using openai

Plain Text
import llama_index

llama_index.set_global_handler("simple")
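Both snippets above switch on verbose tracing. If you want finer control over where the output goes, LlamaIndex also emits its logs through Python's standard logging module, so you can capture the same traffic with stdlib tools alone. A minimal sketch (the logger name "llama_index" and the sample message are illustrative; real library records will differ):

```python
import io
import logging
import sys

# Send all DEBUG-level records, including those emitted by libraries,
# to stdout.
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG, force=True)

# Also capture records into a buffer so they can be inspected in code.
buf = io.StringIO()
logging.getLogger().addHandler(logging.StreamHandler(buf))

# Stand-in for a record a library logger would emit during a query.
logging.getLogger("llama_index").debug("prompt sent to LLM: ...")

captured = buf.getvalue()
print(captured.strip())
```

Raising the root logger to DEBUG is noisy (every library's debug output shows up), so you may prefer `logging.getLogger("llama_index").setLevel(logging.DEBUG)` to scope it down.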