Find answers from the community

Updated 3 months ago

Hello,

I am experimenting with prompt techniques and had success using the QueryEngine.

Here is the code I used:
retriever_engine = RetrieverQueryEngine(
    retriever=retriever,
    response_synthesizer=response_synthesizer
)
retriever_response = retriever_engine.get_prompts()
prompt = display_prompt_dict(retriever_response)
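(The display_prompt_dict helper isn't defined in this thread; a plain-Python sketch of what such a helper might look like, assuming each value in the dict exposes a get_template() method as LlamaIndex prompt objects do, is below. The name and exact output format are assumptions based on how it is used above.)

```python
# Hypothetical sketch of a display_prompt_dict helper -- not the exact
# LlamaIndex docs utility, just one consistent with the usage above.

def display_prompt_dict(prompts_dict):
    """Print each prompt key and its template text; return the printed blocks."""
    blocks = []
    for key, prompt in prompts_dict.items():
        block = f"**Prompt Key**: {key}\n**Text:**\n{prompt.get_template()}\n"
        blocks.append(block)
        print(block)
    return blocks
```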

However, now I need to get the prompt template for agents.
I am using the following code:

from llama_index.core.agent import FunctionCallingAgentWorker, AgentRunner
prompt_response = agent.get_prompts()

When it comes to the agent, I am unable to retrieve the prompt template.

Do you have any suggestions?
13 comments
Please stand by @pavan._01 as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
Sure, I can help with that.

From the knowledge sources, it seems like you're trying to retrieve the prompt template from an agent object. The method you're using, agent.get_prompts(), is correct for retrieving the prompts from an agent.

However, if you're not able to retrieve the prompt template, it could be due to a few reasons:

  1. The agent might not have any prompts set. You can set prompts using the agent.update_prompts() method. Here's an example from the knowledge sources:
Plain Text
react_system_prompt = PromptTemplate(react_system_header_str)
agent.update_prompts({"agent_worker:system_prompt": react_system_prompt})


  2. The key you're using to access the prompt might be incorrect. Make sure you're using the correct key to access the prompt from the dictionary returned by get_prompts(). For example, in the knowledge sources, the key used is agent_worker:system_prompt.
  3. If you're trying to print or display the prompt template, you might need to access the template attribute of the prompt object. Here's an example from the knowledge sources:
Plain Text
prompt_dict = agent.get_prompts()
for k, v in prompt_dict.items():
    print(f"Prompt: {k}\n\nValue: {v.template}")


I hope this helps! Let me know if you have any other questions.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai , I used your code above, but it's showing me an empty dict.

Code :-
prompt_dict = agent.get_prompts()
print(prompt_dict)
for k, v in prompt_dict.items():
    print(f"Prompt: {k}\n\nValue: {v.get_template()}")

Output:-
{}
Let me check the knowledge base for you. One moment! :books:
----------------------
If you're getting an empty dictionary from agent.get_prompts(), it means that no prompts have been set for the agent.

In the LlamaIndex framework, you need to manually set the prompts for an agent using the agent.update_prompts() method. The prompts are not automatically generated or retrieved from an external source.

Here's an example of how you can set a prompt for an agent:

Plain Text
from llama_index.core import PromptTemplate

# Create a prompt template
prompt_template = PromptTemplate("Your prompt template here")

# Update the agent's prompts
agent.update_prompts({"agent_worker:system_prompt": prompt_template})


After running this code, you should be able to retrieve the prompts using agent.get_prompts().

Please replace "Your prompt template here" with the actual prompt template you want to use. The key "agent_worker:system_prompt" is just an example; replace it with the actual key for the prompt you want to set.

If you're still having trouble, could you provide more information about the agent you're using? That would help me give a more specific answer.
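(Before calling update_prompts(), it can help to check which keys the agent actually exposes, since the key names vary by agent type. A small plain-Python sketch, using a stand-in dict shaped like get_prompts() output rather than a real agent:)

```python
# Hedged sketch: look up a prompt key by suffix before updating it, so the
# update targets a slot that actually exists. The dict below stands in for
# the output of agent.get_prompts(); the key name is an example, not a
# guaranteed LlamaIndex key.

def find_prompt_key(prompts_dict, suffix):
    """Return the first key ending with `suffix`, or None if absent."""
    for key in prompts_dict:
        if key.endswith(suffix):
            return key
    return None

# Usage with a stand-in dict:
example = {"agent_worker:system_prompt": object()}
print(find_prompt_key(example, "system_prompt"))  # agent_worker:system_prompt
print(find_prompt_key({}, "system_prompt"))       # None (the empty dict the user saw)
```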
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai , Same output. Let me give you my full code.

Full code :-

llm = OpenAI(model="gpt-3.5-turbo")
agent_worker = FunctionCallingAgentWorker.from_tools(
    tools=[vector_tool],
    llm=llm,
    verbose=True,
    system_prompt="""
You are helping to construct an agent given a user-specified task.
You should generally use the tools in this rough order to build the agent.
Please use the provided tools to answer questions and summarize relevant information.
Do not rely on prior knowledge.

This will be a back and forth conversation with the user. You should
continue asking users if there's anything else they want to do until
they say they're done.
""",
)
retrieve_agent = AgentRunner(agent_worker)
prompt_dict = retrieve_agent.get_prompts()

for k, v in prompt_dict.items():
    print(f"Prompt: {k}\n\nValue: {v.get_template()}")
Let me check the knowledge base for you. One moment! :books:
----------------------
From your code, it seems like you're setting the system_prompt while initializing the FunctionCallingAgentWorker. However, the system_prompt is not a prompt template that can be retrieved using get_prompts(). It's a string that's used internally by the agent worker.

The get_prompts() method is used to retrieve prompt templates that have been set using the update_prompts() method. These are usually used for customizing the prompts that the agent uses during its operation.

If you want to retrieve the system_prompt that you set, you can access it directly from the agent worker:

Plain Text
print(agent_worker.system_prompt)


If you want to set and retrieve a prompt template, you can do it like this:

Plain Text
from llama_index.core import PromptTemplate

# Create a prompt template
prompt_template = PromptTemplate("Your prompt template here")

# Update the agent's prompts
retrieve_agent.update_prompts({"agent_worker:system_prompt": prompt_template})

# Now you can retrieve the prompt
prompt_dict = retrieve_agent.get_prompts()

for k, v in prompt_dict.items():
    print(f"Prompt: {k}\n\nValue: {v.template}")


Please replace "Your prompt template here" with the actual prompt template you want to use. The key "agent_worker:system_prompt" is just an example; replace it with the actual key for the prompt you want to set.
----------------------
It would be great if @WhiteFang_Jr could give me some suggestions.
Function calling agents do not have a prompt template; it's all chat history.
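(To illustrate Logan's point: a function-calling agent builds its LLM input from chat history rather than from prompt templates, which is why get_prompts() came back empty. A minimal plain-Python sketch with hypothetical class names, not the real LlamaIndex implementation:)

```python
# Toy model of a function-calling agent: the system_prompt is prepended to
# the chat history as a message, and no prompt templates are registered --
# hence get_prompts() returns {}. All names here are illustrative.

from dataclasses import dataclass, field

@dataclass
class ChatMessage:
    role: str
    content: str

@dataclass
class ToyFunctionCallingAgent:
    system_prompt: str
    chat_history: list = field(default_factory=list)

    def __post_init__(self):
        # The system prompt becomes the first chat message, not a template.
        self.chat_history.append(ChatMessage("system", self.system_prompt))

    def get_prompts(self):
        # No prompt templates to report.
        return {}

agent = ToyFunctionCallingAgent(system_prompt="You are a helpful assistant.")
print(agent.get_prompts())         # {}
print(agent.chat_history[0].role)  # system
```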
Hello @Logan M ,

Thank you for your response.

If I want to implement the STEP-BACK PROMPTING technique, should I make changes to the system prompt in [FunctionCallingAgentWorker], or should the modifications be made within the response synthesizer as shown below?


multi_query_engine = MultiStepQueryEngine(
    query_engine=query_engine,
    query_transform=step_decompose_transform_gpt3,
    index_summary=index_summary,
    response_synthesizer=get_response_synthesizer(
        text_qa_template=text_qa_template,
        refine_template=refine_template,
    ),
)

vector_tool = QueryEngineTool.from_defaults(
    query_engine=multi_query_engine,
    description="Use this tool by entering a detailed, plain text question. The tool leverages semantic search capabilities with the vector store to provide accurate and relevant results based on your input.",
)

I appreciate your guidance on this matter.
@Logan M , My ultimate goal is to implement STEP-BACK PROMPTING and other prompting techniques for my agent.

Your guidance and suggestions would be greatly appreciated.
I think those changes would go inside the agent, if I'm remembering this prompting technique properly.
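(For reference, the core loop of step-back prompting can be sketched in plain Python: first ask the LLM for a more generic "step-back" question, answer that, then answer the original question with the step-back answer as background. The templates and the llm() callable below are illustrative assumptions, not LlamaIndex APIs.)

```python
# Step-back prompting, sketched with a generic llm(prompt) -> str callable.
# Template wording is an assumption; adapt it to your own prompts.

STEP_BACK_TEMPLATE = (
    "Given the question below, write a more generic step-back question "
    "that captures the underlying principle.\n"
    "Question: {question}\nStep-back question:"
)

ANSWER_TEMPLATE = (
    "Background: {background}\n\n"
    "Using the background above, answer: {question}"
)

def step_back_answer(question, llm):
    """Answer `question` via step-back prompting: abstract, then answer."""
    step_back_q = llm(STEP_BACK_TEMPLATE.format(question=question))
    background = llm(step_back_q)
    return llm(ANSWER_TEMPLATE.format(background=background, question=question))

# Usage with a stub LLM (a real call would wrap an actual model):
result = step_back_answer("Why does ice float?", lambda p: "stub answer")
```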