The community member asks how to see the formatted prompt that is actually sent to the Large Language Model (LLM). Calling query_engine.get_prompts() shows the prompt templates in use, but not the fully formatted prompts (templates with the retrieved context and query filled in) that are sent to the LLM.
In the comments, other community members suggest using observability tools such as Arize Phoenix or the Simple LLM Inputs/Outputs handler to inspect this (see the sketch below). However, there is no explicitly marked answer in the post or its comments.
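For reference, a minimal sketch of the approach suggested in the comments, using LlamaIndex's "simple" global handler (the Simple LLM Inputs/Outputs option), which prints each LLM input and output to stdout. The data directory and query string below are placeholders, and the surrounding index setup is only illustrative:

```python
# Minimal sketch: print the formatted prompts sent to the LLM via the
# "simple" observability handler. Paths and queries are placeholders.
from llama_index.core import (
    SimpleDirectoryReader,
    VectorStoreIndex,
    set_global_handler,
)

# Enable the Simple LLM Inputs/Outputs handler; it logs every LLM call's
# full input (the formatted prompt) and output to stdout.
set_global_handler("simple")
# Alternative (requires the Arize Phoenix integration package):
# set_global_handler("arize_phoenix")

# Build a query engine as usual.
documents = SimpleDirectoryReader("./data").load_data()  # placeholder path
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# get_prompts() only returns the prompt *templates*...
print(query_engine.get_prompts())

# ...whereas running a query now also prints the fully formatted prompt
# (template + retrieved context + question) that is sent to the LLM.
response = query_engine.query("What does the document say about X?")
print(response)
```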
How can I see the formatted prompt that is sent to the LLM? I can see which prompts are used via query_engine.get_prompts(), but not the formatted ones that are actually sent to the LLM.