
**How to get the completed prompt?**

At a glance

The community member is trying to understand how to get the completed prompt from the LlamaIndex query engine. They provide code to display the default prompt templates, which show two prompts: "response_synthesizer:text_qa_template" and "response_synthesizer:refine_template". The community member is asking if it is possible to get the completed prompt with the context_msg and query_str values filled in.

In the comments, another community member suggests using Arize, an AI observability and evaluation visualization tool, to get more details on the final LLM call input and output values.

LlamaIndex uses a set of default prompt templates.
To get the prompts from the query engine, I do this:
Plain Text
from IPython.display import Markdown, display

# define a helper to display each prompt template in the dict
def display_prompt_dict(prompts_dict):
    for k, p in prompts_dict.items():
        text_md = f"**Prompt Key**: {k}<br>**Text:** <br>"
        display(Markdown(text_md))
        print(p.get_template())
        display(Markdown("<br><br>"))

prompts_dict = query_engine.get_prompts()
display_prompt_dict(prompts_dict)

which gives me this view of the prompts:
Plain Text
**Prompt Key:** response_synthesizer:text_qa_template
**Text:**

Context information is below.
---------------------
{context_str}
---------------------
Given the context information and not prior knowledge, answer the query.
Query: {query_str}
Answer: 



**Prompt Key:** response_synthesizer:refine_template
**Text:**

The original query is as follows: {query_str}
We have provided an existing answer: {existing_answer}
We have the opportunity to refine the existing answer (only if needed) with some more context below.
------------
{context_msg}
------------
Given the new context, refine the original answer to better answer the query. If the context isn't useful, return the original answer.
Refined Answer: 

=> Is it possible to get the completed prompt, with context_msg and query_str actually filled in?
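
For reference, one way to approximate the completed prompt is to fill the template in manually with retrieved context. A minimal sketch, assuming `index` is an existing VectorStoreIndex and reusing the prompts_dict from above (the query string and top-k value are illustrative):

Plain Text
# sketch: fill the QA template manually with retrieved context
query_str = "What did the author do growing up?"

# retrieve the nodes the query engine would use as context
retriever = index.as_retriever(similarity_top_k=2)
nodes = retriever.retrieve(query_str)
context_str = "\n\n".join(n.node.get_content() for n in nodes)

# format the default QA template with the retrieved context
qa_template = prompts_dict["response_synthesizer:text_qa_template"]
print(qa_template.format(context_str=context_str, query_str=query_str))

The refine template can be filled the same way with query_str, existing_answer, and context_msg. Note this only approximates what the engine sends; an observability integration (see the comments below) captures the actual LLM call.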
2 comments
You can use Arize. It will give you full detail on the final LLM call's input and output values, and more!

https://docs.llamaindex.ai/en/stable/module_guides/observability/observability.html#arize-phoenix
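
A minimal setup sketch following that guide (the "arize_phoenix" handler name is documented there; the exact import path varies by llama_index version, so treat this as an assumption):

Plain Text
# pip install arize-phoenix
import phoenix as px
import llama_index

# launch the local Phoenix app to view traces
px.launch_app()

# route LlamaIndex traces to Phoenix; every query_engine.query(...)
# afterwards will show the final, fully filled-in prompt in the UI
llama_index.set_global_handler("arize_phoenix")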
Thanks for your help. This is a really interesting AI observability and evaluation visualization tool.