**How to get the completed prompt?**

LlamaIndex uses a set of default prompt templates.
To get the prompts from the query engine, I do this:
```python
from IPython.display import Markdown, display

# define a prompt viewing function
def display_prompt_dict(prompts_dict):
    # prompts_dict maps each prompt key to a PromptTemplate object
    for k, p in prompts_dict.items():
        text_md = f"**Prompt Key**: {k}<br>" f"**Text:** <br>"
        display(Markdown(text_md))
        print(p.get_template())
        display(Markdown("<br><br>"))

prompts_dict = query_engine.get_prompts()
display_prompt_dict(prompts_dict)
```

which gives me this view of the prompts:
```
**Prompt Key:** response_synthesizer:text_qa_template
**Text:**

Context information is below.
---------------------
{context_str}
---------------------
Given the context information and not prior knowledge, answer the query.
Query: {query_str}
Answer: 



**Prompt Key:** response_synthesizer:refine_template
**Text:**

The original query is as follows: {query_str}
We have provided an existing answer: {existing_answer}
We have the opportunity to refine the existing answer (only if needed) with some more context below.
------------
{context_msg}
------------
Given the new context, refine the original answer to better answer the query. If the context isn't useful, return the original answer.
Refined Answer: 
```

=> Is it possible to get the completed prompt, with context_msg and query_str actually filled in?
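
One way to approximate this yourself is a minimal sketch like the following, assuming `index` is the index that `query_engine` was built from and the query string is a made-up example: retrieve the nodes the engine would use, join them into `context_str`, and call `.format()` on the template.

```python
# Sketch: manually fill the text_qa_template.
# Assumption: `index` is the index behind `query_engine`,
# and the query string below is hypothetical.
query_str = "What did the author do growing up?"

# Retrieve the same kind of context the query engine would fetch
retriever = index.as_retriever()
nodes = retriever.retrieve(query_str)
context_str = "\n\n".join(n.node.get_content() for n in nodes)

# Fill in the template variables to see a completed prompt
qa_template = prompts_dict["response_synthesizer:text_qa_template"]
print(qa_template.format(context_str=context_str, query_str=query_str))
```

This only approximates a single text_qa call; if the engine falls back to the refine template across multiple chunks, the actual prompts sent to the LLM will differ, which is where the observability route below helps.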
2 comments
You can use Arize. It will give you full details on the final LLM call's input and output values, and more!

https://docs.llamaindex.ai/en/stable/module_guides/observability/observability.html#arize-phoenix
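
For reference, the setup from those docs boils down to roughly this (a sketch; exact import paths depend on your llama-index version, since newer releases expose `set_global_handler` from `llama_index.core`):

```python
import phoenix as px
import llama_index

# Start the local Phoenix collector and UI
px.launch_app()

# Route LlamaIndex callback events to Arize Phoenix
llama_index.set_global_handler("arize_phoenix")

# Queries made after this point are traced; the Phoenix UI shows the
# fully rendered prompt (context and query filled in) for each LLM call.
response = query_engine.query("What did the author do growing up?")
```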
Thanks for your help. This is really interesting as an AI observability & evaluation visualization tool.