You can customize prompts using the `update_prompts` function. This function allows you to customize prompts on any module that implements `get_prompts`. You just need to pass in argument values with keys equal to the keys you see in the prompt dictionary obtained through `get_prompts`.
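To discover which keys are available, you can first inspect the prompt dictionary. A minimal sketch, assuming `query_engine` has already been built:

```python
# Fetch the prompts currently attached to the query engine
prompts_dict = query_engine.get_prompts()

# The dictionary keys are what you pass to update_prompts,
# e.g. "response_synthesizer:text_qa_template"
for key in prompts_dict.keys():
    print(key)
```

Once you know the key for the prompt you want to change, define a custom prompt and pass it to `update_prompts`: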
```python
from llama_index.core import PromptTemplate

# Define your custom prompt
custom_prompt_str = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)

# Create a PromptTemplate with your custom prompt
custom_prompt = PromptTemplate(custom_prompt_str)

# Update the prompts of your query engine
query_engine.update_prompts(
    {"response_synthesizer:text_qa_template": custom_prompt}
)
```
In this code, `query_engine` is your instance of the query engine. The key `"response_synthesizer:text_qa_template"` identifies the prompt you want to update; this key might be different depending on the specific prompt you want to customize. `{context_str}` is a placeholder in your prompt template that gets replaced with the actual context when you format the prompt. It's a variable that you pass in when you call the `format` method on your `PromptTemplate` instance:

```python
# Define your context string
context_str = "This is the context for my query."

# Define your query string
query_str = "This is my query."

# Format the prompt with your context and query strings
formatted_prompt = custom_prompt.format(context_str=context_str, query_str=query_str)
```
Here, `context_str` is the actual context you want to use for your query; it can be any string that provides context for the query you're making, and `query_str` is the actual query you want to make. The resulting `formatted_prompt` is a string with your context and query substituted where `{context_str}` and `{query_str}` appeared in your original prompt template.
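For instance, printing the formatted prompt from the snippet above shows the substitution (expected output shown as comments):

```python
print(formatted_prompt)
# Context information is below.
# ---------------------
# This is the context for my query.
# ---------------------
# Given the context information and not prior knowledge, answer the query.
# Query: This is my query.
# Answer:
```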
In practice, `context_str` is typically the information that provides context for your query: a string containing what the model needs in order to answer. With a retrieval-based setup, it is often the content retrieved from the knowledge base that is relevant to the query; for example, `context_str` might be the documents or passages that the retriever returns. Concretely, `context_str`
is the content of the retrieved nodes:

```python
# Retrieve nodes relevant to the query
retrieved_nodes = vector_retriever.retrieve(query_str)

# The context string is the content of the retrieved nodes
context_str = "\n\n".join([n.get_content() for n in retrieved_nodes])
```
In this snippet, `vector_retriever` is an instance of a vector retriever and `query_str` is the query you're making. The `retrieve` method fetches nodes from the knowledge base that are relevant to the query, the `get_content` method returns the content of each node, and `context_str` is the content of all the retrieved nodes joined together with blank lines in between.

Whatever the retriever returns for the query is what `context_str` will be. If you want to add something to it, you can simply concatenate your additional information to `context_str` before passing it to the `format` method, as in the sketch below.
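A minimal sketch; the `additional_info` string here is hypothetical, purely to illustrate the concatenation:

```python
# Hypothetical extra information to prepend (assumed for illustration)
additional_info = "Note: the user is a first-time customer."

# Concatenate it with the retrieved context before formatting the prompt
context_str = additional_info + "\n\n" + context_str

formatted_prompt = custom_prompt.format(context_str=context_str, query_str=query_str)
```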
You can also add a new variable of your own to the prompt template by including a placeholder for it in the template string, for example `{chatbot_description}`
. You then need to provide a value for this variable when you format the prompt:

```python
from llama_index.core import PromptTemplate

# Define your custom prompt with the new variable
custom_prompt_str = (
    "You are an expert Q&A system that is trusted around the world.\n"
    "Your description as an agent is as follows: \n"
    "{chatbot_description}\n"
    "Always answer the query using the provided context information, "
    "and not prior knowledge.\n"
    "Some rules to follow:\n"
    "Never directly reference the given context in your answer.\n"
    "Avoid statements like 'Based on the context, ...' or "
    "'The context information ...' or anything along those lines.\n"
    "\n"
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)

# Create a PromptTemplate with your custom prompt
custom_prompt = PromptTemplate(custom_prompt_str)

# Define your chatbot description, context string, and query string
chatbot_description = "I am an AI chatbot designed to answer complex queries."
context_str = "This is the context for my query."
query_str = "This is my query."

# Format the prompt with your chatbot description, context string, and query string
formatted_prompt = custom_prompt.format(
    chatbot_description=chatbot_description,
    context_str=context_str,
    query_str=query_str,
)
```
Here, `chatbot_description` is the description of your chatbot that you want to include in the prompt; it can be any string that describes your chatbot. The resulting `formatted_prompt` is a string with your chatbot description, context, and query substituted where `{chatbot_description}`, `{context_str}`, and `{query_str}` appeared in your original prompt template.

To use this custom prompt with a query engine, register it with the `update_prompts` function; you can then use the `query` method of your query engine to make a query.
Note that `formatted_prompt`
is not directly added to the query engine. Instead, the `custom_prompt` (which is a `PromptTemplate` object) is what gets added, via the `update_prompts` method; `formatted_prompt` is just an example of what the `custom_prompt` looks like once its placeholders are replaced with actual values. Putting it all together (one adjustment: since the query engine only supplies `context_str` and `query_str` at query time, `{chatbot_description}` is pre-filled here with `partial_format`):
```python
from llama_index.core import PromptTemplate

# Define your custom prompt with the new variable
custom_prompt_str = (
    "You are an expert Q&A system that is trusted around the world.\n"
    "Your description as an agent is as follows: \n"
    "{chatbot_description}\n"
    "Always answer the query using the provided context information, "
    "and not prior knowledge.\n"
    "Some rules to follow:\n"
    "Never directly reference the given context in your answer.\n"
    "Avoid statements like 'Based on the context, ...' or "
    "'The context information ...' or anything along those lines.\n"
    "\n"
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)

# Create a PromptTemplate with your custom prompt
custom_prompt = PromptTemplate(custom_prompt_str)

# The query engine only fills in context_str and query_str at query time,
# so pre-fill chatbot_description using partial_format
chatbot_description = "I am an AI chatbot designed to answer complex queries."
custom_prompt = custom_prompt.partial_format(chatbot_description=chatbot_description)

# Update the prompts of your query engine
query_engine.update_prompts(
    {"response_synthesizer:text_qa_template": custom_prompt}
)

# Now you can make a query with your query engine
query_str = "This is my query."
response = query_engine.query(query_str)

# Print the response
print(str(response))
```
In this code, `custom_prompt` is added to the query engine, and the query engine then uses it whenever you call `query_engine.query(query_str)`. The `{context_str}` and `{query_str}` placeholders are filled in automatically when the query is made; `{chatbot_description}` was pre-filled with `partial_format`, since the query engine does not supply a value for it. Alternatively, if you would rather drive the LLM yourself, you can format the prompt manually and call the `complete`
method of the LLM (Large Language Model) with your fully formatted prompt:

```python
from llama_index.llms.openai import OpenAI

# Define your chatbot description, context string, and query string
chatbot_description = "I am an AI chatbot designed to answer complex queries."
context_str = "This is the context for my query."
query_str = "This is my query."

# Format the prompt (custom_prompt is the PromptTemplate defined above)
formatted_prompt = custom_prompt.format(
    chatbot_description=chatbot_description,
    context_str=context_str,
    query_str=query_str,
)

# Initialize the LLM
llm = OpenAI(model="gpt-3.5-turbo")

# Use the complete method of the LLM with your fully formatted prompt
response = llm.complete(formatted_prompt)

# Print the response
print(str(response))
```
In this code, `llm` is an instance of the LLM, and the `complete` method generates a completion for the given prompt. Note, however, that the query engine normally performs retrieval and prompt construction for you; if you call the `complete` method of the LLM with a fully formatted prompt, you are bypassing these steps.
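If you do bypass the query engine, retrieval becomes your responsibility. A minimal sketch reusing the `vector_retriever`, `custom_prompt`, and `llm` from the snippets above (all assumed to be set up already):

```python
# Retrieve relevant context yourself, since no query engine is doing it for you
query_str = "This is my query."
retrieved_nodes = vector_retriever.retrieve(query_str)
context_str = "\n\n".join([n.get_content() for n in retrieved_nodes])

# Format the prompt and call the LLM directly
# (chatbot_description was already pre-filled with partial_format above)
formatted_prompt = custom_prompt.format(context_str=context_str, query_str=query_str)
response = llm.complete(formatted_prompt)
print(str(response))
```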