How do I add a custom prompt to my query engine? I want to pass in a custom prompt.

At a glance

The community members are discussing how to add a custom prompt to a query engine in LlamaIndex. The key steps are:

1. Define a custom prompt string with placeholders for context and query.

2. Create a PromptTemplate object from the custom prompt string.

3. Update the prompts of the query engine using the update_prompts method, passing in the custom prompt.

4. When making a query, provide the actual context and query strings to format the prompt.

The community members also discuss where the context_str comes from and how to add additional variables to the prompt.

There is no explicitly marked answer, but the community members provide detailed steps and examples on how to customize the prompt used by the query engine.

how do I add a custom prompt to my query engine? I want to pass in a custom prompt.

Let's say this is my prompt; how do I pass the context_str into it?

```python
custom_prompt_str = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)
```

Please stand by @Antoine as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
To add a custom prompt to your query engine in LlamaIndex, you can use the update_prompts method. It lets you customize the prompts on any module that implements get_prompts: you pass in a dictionary whose keys match the keys of the prompt dictionary returned by get_prompts.

Here's how you can do it:

```python
from llama_index.core import PromptTemplate

# Define your custom prompt
custom_prompt_str = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)

# Create a PromptTemplate with your custom prompt
custom_prompt = PromptTemplate(custom_prompt_str)

# Update the prompts of your query engine
query_engine.update_prompts(
    {"response_synthesizer:text_qa_template": custom_prompt}
)
```

In this code, query_engine is your query engine instance. The key "response_synthesizer:text_qa_template" identifies the prompt being updated; the exact key can differ depending on which prompt you want to customize.
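
If you are not sure which keys your engine exposes, you can list them first with the get_prompts method. A minimal sketch (the example keys shown are typical defaults and may differ for your engine):

```python
# Inspect the prompt dictionary to find the key you want to override
prompts_dict = query_engine.get_prompts()
print(list(prompts_dict.keys()))
# e.g. ['response_synthesizer:text_qa_template',
#       'response_synthesizer:refine_template']
```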

After running this code, your query engine will use your custom prompt when synthesizing responses.

For more information, you can refer to the LlamaIndex documentation.
----------------------
@kapa.ai where do I add the context_str in that template? Where does it even come from? I want to add something to it.
Let me check the knowledge base for you. One moment! :books:
----------------------
The context_str is a placeholder in your prompt template that gets replaced with the actual context when you format the prompt. It's a variable that you pass in when you call the format method on your PromptTemplate instance.

Here's an example of how you can add a context string to your prompt:

Plain Text
# Define your context string
context_str = "This is the context for my query."

# Define your query string
query_str = "This is my query."

# Format the prompt with your context and query strings
formatted_prompt = custom_prompt.format(context_str=context_str, query_str=query_str)


In this code, context_str is the actual context for your query; it can be any string that provides background for the question. The query_str is the question itself.

After running this code, formatted_prompt will be a string with your context and query substituted wherever {context_str} and {query_str} appeared in the original prompt template.
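
For instance, printing the result shows the placeholders filled in:

```python
print(formatted_prompt)
# Context information is below.
# ---------------------
# This is the context for my query.
# ---------------------
# Given the context information and not prior knowledge, answer the query.
# Query: This is my query.
# Answer:
```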

For more information, you can refer to the LlamaIndex documentation.
----------------------
@kapa.ai originally what is the context_str? Let's say I don't edit it, where does the context_str come from as is?
Let me check the knowledge base for you. One moment! :books:
----------------------
The context_str is typically a string containing the information the model needs in order to answer the query.

In the context of LlamaIndex, the context_str is often the content retrieved from the knowledge base that is relevant to the query. For example, if you're using a retrieval-based model, the context_str might be the documents or passages that the model retrieves based on the query.

Here's an example from the LlamaIndex documentation where the context_str is the content of the retrieved nodes:

```python
# Retrieve nodes relevant to the query
retrieved_nodes = vector_retriever.retrieve(query_str)

# The context string is the content of the retrieved nodes
context_str = "\n\n".join([n.get_content() for n in retrieved_nodes])
```

In this code, vector_retriever is a vector retriever instance and query_str is your query. The retrieve method fetches the nodes relevant to the query, get_content returns each node's text, and context_str joins those texts with blank lines in between.

If you don't edit the context_str, it will be whatever content the model retrieves from the knowledge base based on the query. If you want to add something to it, you can simply concatenate your additional information to the context_str before passing it to the format method.
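
A minimal sketch of that idea, where extra_info is a hypothetical string of your own:

```python
# Hypothetical extra information to show the model alongside the retrieved context
extra_info = "Additional background the retriever would not return on its own."

# Prepend it to the retrieved context before formatting the prompt
context_str = extra_info + "\n\n" + context_str

formatted_prompt = custom_prompt.format(context_str=context_str, query_str=query_str)
```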
----------------------
@kapa.ai let's say I want to add a new variable in my prompt like this, how do I add the contents of that variable?

"""You are an expert Q&A system that is trusted around the world.
Your description as an agent is as follows:
{chatbot_description}
Always answer the query using the provided context information, and not prior knowledge.
Some rules to follow:
Never directly reference the given context in your answer.
Avoid statements like 'Based on the context, ...' or 'The context information ...' or anything along those lines.

Context information is below.
---------------------
{context_str}
---------------------
Given the context information and not prior knowledge, answer the query.
Query: {query_str}
Answer:
Let me check the knowledge base for you. One moment! :books:
----------------------
To add a new variable to your prompt, you can simply include it in the string as you've done with {chatbot_description}. You then need to provide a value for this variable when you format the prompt.

Here's how you can do it:

```python
from llama_index.core import PromptTemplate

# Define your custom prompt with the new variable
custom_prompt_str = (
    "You are an expert Q&A system that is trusted around the world.\n"
    "Your description as an agent is as follows: \n"
    "{chatbot_description}\n"
    "Always answer the query using the provided context information, and not prior knowledge.\n"
    "Some rules to follow:\n"
    "Never directly reference the given context in your answer.\n"
    "Avoid statements like 'Based on the context, ...' or 'The context information ...' or anything along those lines.\n"
    "\n"
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)

# Create a PromptTemplate with your custom prompt
custom_prompt = PromptTemplate(custom_prompt_str)

# Define your chatbot description, context string, and query string
chatbot_description = "I am an AI chatbot designed to answer complex queries."
context_str = "This is the context for my query."
query_str = "This is my query."

# Format the prompt with your chatbot description, context string, and query string
formatted_prompt = custom_prompt.format(
    chatbot_description=chatbot_description,
    context_str=context_str,
    query_str=query_str,
)
```

In this code, chatbot_description is whatever string describes your chatbot.

After running this code, formatted_prompt will include your chatbot description, context, and query wherever {chatbot_description}, {context_str}, and {query_str} appeared in the original prompt template.

For more information, you can refer to the LlamaIndex documentation.
----------------------
@kapa.ai after making the formatted prompt, how do I update the prompt and then query it?
Let me check the knowledge base for you. One moment! :books:
----------------------
After creating your formatted prompt, you can update the prompt in your query engine using the update_prompts method, then make a query with its query method. Here's how you can do it:

```python
from llama_index.core import PromptTemplate

# Define your custom prompt with the new variable
custom_prompt_str = (
    "You are an expert Q&A system that is trusted around the world.\n"
    "Your description as an agent is as follows: \n"
    "{chatbot_description}\n"
    "Always answer the query using the provided context information, and not prior knowledge.\n"
    "Some rules to follow:\n"
    "Never directly reference the given context in your answer.\n"
    "Avoid statements like 'Based on the context, ...' or 'The context information ...' or anything along those lines.\n"
    "\n"
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)

# Create a PromptTemplate with your custom prompt
custom_prompt = PromptTemplate(custom_prompt_str)

# Update the prompts of your query engine
query_engine.update_prompts(
    {"response_synthesizer:text_qa_template": custom_prompt}
)

# Define your chatbot description, context string, and query string
chatbot_description = "I am an AI chatbot designed to answer complex queries."
context_str = "This is the context for my query."
query_str = "This is my query."

# Format the prompt with your chatbot description, context string, and query string
formatted_prompt = custom_prompt.format(
    chatbot_description=chatbot_description,
    context_str=context_str,
    query_str=query_str,
)

# Make a query with your query engine
response = query_engine.query(query_str)

# Print the response
print(str(response))
```

In this code, query_engine is your query engine instance. The key "response_synthesizer:text_qa_template" identifies the prompt to update; the exact key can differ depending on which prompt you want to customize.

After running this code, your query engine will use your custom prompt when synthesizing responses, and you will see the response to your query printed out.

For more information, you can refer to the LlamaIndex documentation.
----------------------
@kapa.ai but you never added your formatted prompt into the prompts of the query engine there? Where is it added?
Let me check the knowledge base for you. One moment! :books:
----------------------
I apologize for the confusion. The formatted_prompt is not added to the query engine directly. Instead, the custom_prompt (a PromptTemplate object) is added via the update_prompts method; formatted_prompt only illustrates what custom_prompt looks like once the placeholders are replaced with actual values.

Here's the corrected code:

```python
from llama_index.core import PromptTemplate

# Define your custom prompt with the new variable
custom_prompt_str = (
    "You are an expert Q&A system that is trusted around the world.\n"
    "Your description as an agent is as follows: \n"
    "{chatbot_description}\n"
    "Always answer the query using the provided context information, and not prior knowledge.\n"
    "Some rules to follow:\n"
    "Never directly reference the given context in your answer.\n"
    "Avoid statements like 'Based on the context, ...' or 'The context information ...' or anything along those lines.\n"
    "\n"
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)

# Create a PromptTemplate with your custom prompt
custom_prompt = PromptTemplate(custom_prompt_str)

# Update the prompts of your query engine
query_engine.update_prompts(
    {"response_synthesizer:text_qa_template": custom_prompt}
)

# Now you can make a query with your query engine
query_str = "This is my query."
response = query_engine.query(query_str)

# Print the response
print(str(response))
```

In this code, the custom_prompt template is added to the query engine, which then uses it when you call query_engine.query(query_str). The {context_str} and {query_str} placeholders are filled in automatically during retrieval and synthesis; {chatbot_description}, however, is not a variable the engine supplies on its own, so it needs to be filled in before the template is installed (see the sketch below).
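
One way to pre-fill the extra variable is the partial_format method on PromptTemplate, which returns a copy of the template with some variables already bound. A minimal sketch:

```python
# Bind chatbot_description up front; the engine fills context_str
# and query_str on its own during retrieval and synthesis
partial_prompt = custom_prompt.partial_format(
    chatbot_description="I am an AI chatbot designed to answer complex queries."
)

query_engine.update_prompts(
    {"response_synthesizer:text_qa_template": partial_prompt}
)

response = query_engine.query("This is my query.")
print(str(response))
```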

For more information, you can refer to the LlamaIndex documentation.
----------------------
@kapa.ai I want to pass in the formatted prompt rather than the custom prompt. Do you get it?
Let me check the knowledge base for you. One moment! :books:
----------------------
I see, you want to pass a fully formatted prompt directly to the query engine. In this case, you would not be updating the prompt template in the query engine, but rather directly using the complete method of the LLM (Large Language Model) with your fully formatted prompt.

Here's how you can do it:

```python
from llama_index.llms.openai import OpenAI

# Define your chatbot description, context string, and query string
chatbot_description = "I am an AI chatbot designed to answer complex queries."
context_str = "This is the context for my query."
query_str = "This is my query."

# Format the prompt with your chatbot description, context string, and query string
formatted_prompt = custom_prompt.format(
    chatbot_description=chatbot_description,
    context_str=context_str,
    query_str=query_str,
)

# Initialize the LLM
llm = OpenAI(model="gpt-3.5-turbo")

# Use the complete method of the LLM with your fully formatted prompt
response = llm.complete(formatted_prompt)

# Print the response
print(str(response))
```

In this code, llm is an instance of the Large Language Model. The complete method generates a completion for the given prompt.

Please note that this approach bypasses the query engine and directly interacts with the LLM. The query engine in LlamaIndex is designed to handle the retrieval and synthesis steps, which includes retrieving relevant documents from the knowledge base and synthesizing the final response. If you directly use the complete method of the LLM with a fully formatted prompt, you would be bypassing these steps.
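
If you want the retrieval step but still full control over the final prompt, one option is to retrieve the context yourself (reusing the vector_retriever from the earlier example, which is assumed to exist) and then call complete. A sketch:

```python
# Retrieve relevant nodes yourself, as in the earlier retriever example
retrieved_nodes = vector_retriever.retrieve(query_str)
context_str = "\n\n".join([n.get_content() for n in retrieved_nodes])

# Format the full prompt with the retrieved context, then complete it
formatted_prompt = custom_prompt.format(
    chatbot_description=chatbot_description,
    context_str=context_str,
    query_str=query_str,
)
response = llm.complete(formatted_prompt)
print(str(response))
```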

For more information, you can refer to the LlamaIndex documentation.
----------------------
@kapa.ai no I want to use query engine not llm complete.
Let me check the knowledge base for you. One moment! :books:
----------------------
I apologize for the confusion earlier. The LlamaIndex query engine is designed to handle the retrieval and synthesis steps, which includes retrieving relevant documents from the knowledge base and synthesizing the final response. The prompt templates used in these steps are typically not fully formatted prompts, but rather templates with placeholders that get replaced with actual values during the retrieval and synthesis steps.

If you want to use a fully formatted prompt with the query engine, you would need to modify the retrieval and synthesis steps to use your fully formatted prompt. However, this is not a typical use case and might require significant changes to the underlying code of the query engine.

As an alternative, you could consider updating the prompt template in the query engine with your custom prompt (not fully formatted), and then providing the values for the placeholders when you make a query. Here's how you can do it:

```python
from llama_index.core import PromptTemplate

# Define your custom prompt with the new variable
custom_prompt_str = (
    "You are an expert Q&A system that is trusted around the world.\n"
    "Your description as an agent is as follows: \n"
    "{chatbot_description}\n"
    "Always answer the query using the provided context information, and not prior knowledge.\n"
    "Some rules to follow:\n"
    "Never directly reference the given context in your answer.\n"
    "Avoid statements like 'Based on the context, ...' or 'The context information ...' or anything along those lines.\n"
    "\n"
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)

# Create a PromptTemplate with your custom prompt
custom_prompt = PromptTemplate(custom_prompt_str)

# Update the prompts of your query engine
query_engine.update_prompts(
    {"response_synthesizer:text_qa_template": custom_prompt}
)

# Now you can make a query with your query engine
query_str = "This is my query."
response = query_engine.query(query_str)

# Print the response
print(str(response))
```

In this code, the custom_prompt is added to the query engine, which uses it when you call query_engine.query(query_str). The {context_str} and {query_str} placeholders are filled in during retrieval and synthesis; {chatbot_description} must be pre-filled beforehand, for example with partial_format as shown earlier.

For more information, you can refer to the LlamaIndex documentation.
----------------------