A community member asked whether it is possible to see the exact prompt and data being sent to the LLM. Another community member suggested using the LlamaDebugHandler, which logs each input to and output from the LLM, as a way to inspect the prompt.
Hi, is it possible to see what exact prompt and data is being sent to the LLM? I know one can look at the sources in the response, but I'm wondering if it's possible to see the actual prompt? Thanks