
Updated 3 months ago

how can I pass system and user messages to ReAct agent ?

@kapa.ai how can I pass system and user messages to ReAct agent ?
15 comments
Please stand by @praveenks30#120294 as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
To pass system and user messages to a ReAct agent, you can use the ReActChatFormatter class. This class formats the chat history into a list of ChatMessage objects. Here's a basic example of how you can use it:

Plain Text
from llama_index.core.agent import ReActChatFormatter
from llama_index.core.llms import ChatMessage

# Initialize the chat formatter
chat_formatter = ReActChatFormatter()

# Define your tools and chat history
tools = [...]  # Your tools go here
chat_history = [
    ChatMessage(
        content="Your message content here",
        role="user",
    )
]

# Format the chat history
input_msgs = chat_formatter.format(tools, chat_history)


In this example, input_msgs is the list of ChatMessage objects that you can pass to the LLM. Each ChatMessage has a content attribute (the message text) and a role attribute (the sender's role, typically "system", "user", or "assistant").
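Conceptually, the formatted list is just a system message followed by the conversation turns. Here is a dependency-free sketch of that structure; the Msg class and format_messages helper below are illustrative stand-ins, not llama_index API:

```python
from dataclasses import dataclass


@dataclass
class Msg:
    """Minimal stand-in for a chat message with a role and content."""
    role: str
    content: str


def format_messages(system_prompt: str, chat_history: list) -> list:
    """Prepend the system prompt to the user/assistant history."""
    return [Msg("system", system_prompt)] + list(chat_history)


msgs = format_messages(
    "You are a helpful ReAct-style agent.",
    [Msg("user", "What is 2 + 2?")],
)
```

The key point is the ordering: the system message always comes first, and the user/assistant turns follow in conversation order.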

You can also customize the system prompt by using the PromptTemplate class and the agent.update_prompts() method:

Plain Text
from llama_index.core import PromptTemplate

# Define your system prompt
react_system_prompt = PromptTemplate("Your system prompt here")

# Update the system prompt
agent.update_prompts({"agent_worker:system_prompt": react_system_prompt})


In this example, "Your system prompt here" should be replaced with your actual system prompt.

Please note that you need to replace [...] and "Your message content here" with your actual tools and message content.
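To make the template idea concrete, here is a stdlib-only sketch of how a system prompt with a placeholder gets rendered before it is sent to the model. llama_index's PromptTemplate offers similar .format()-style substitution, but the template text and helper name below are illustrative assumptions:

```python
# Illustrative template with one placeholder; a real ReAct system prompt
# also includes detailed reasoning-format instructions.
REACT_SYSTEM_TEMPLATE = (
    "You are designed to help with a variety of tasks.\n"
    "You have access to the following tools:\n"
    "{tool_desc}\n"
)


def render_system_prompt(template: str, tool_desc: str) -> str:
    """Fill the {tool_desc} placeholder with the tool descriptions."""
    return template.format(tool_desc=tool_desc)


prompt = render_system_prompt(
    REACT_SYSTEM_TEMPLATE, "- search: searches the web for a query"
)
```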

For more information, you can refer to the following sources:
  • ReActChatFormatter documentation
  • Example of using ReActChatFormatter
  • Example of customizing the system prompt
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai Does ReAct agent take message Prompts or we can only pass the prompts ?
Let me check the knowledge base for you. One moment! :books:
----------------------
The ReAct agent in LlamaIndex does not directly take message prompts. Instead, it uses prompts as part of its internal workings to guide the conversation. You can retrieve the current prompts used by the agent using the agent.get_prompts() method, and you can update the prompts using the agent.update_prompts() method.

Here's an example of how you can get and update prompts:

Plain Text
# Get the current prompts
prompt_dict = agent.get_prompts()
for k, v in prompt_dict.items():
    print(f"Prompt: {k}\n\nValue: {v.template}")

# Update the prompts
react_system_prompt = PromptTemplate("Your system prompt here")
agent.update_prompts({"agent_worker:system_prompt": react_system_prompt})


In this example, "Your system prompt here" should be replaced with your actual system prompt.

Please note that the prompts are used internally by the agent to guide the conversation and are not directly passed as input to the agent. The input to the agent is typically a string representing the user's message, which you can pass using the agent.chat() method.
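As a dependency-free sketch of that calling pattern, the stub agent below takes a plain string per turn and accumulates history internally; in llama_index you would call agent.chat("...") instead, and the real agent manages memory for you:

```python
class EchoAgent:
    """Stand-in agent: records each turn and returns a canned reply."""

    def __init__(self):
        self.history = []

    def chat(self, message: str) -> str:
        # Record the user turn, produce a reply, record the assistant turn.
        self.history.append(("user", message))
        reply = f"You said: {message}"
        self.history.append(("assistant", reply))
        return reply


agent = EchoAgent()
reply = agent.chat("What is the capital of France?")
```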

For more information, you can refer to the following sources:

  • Example of getting and updating prompts
  • Example of using the agent.chat() method
----------------------
@kapa.ai ok so what if I have to pass multiple examples as input for the reference so how can I pass that to agent ?
Let me check the knowledge base for you. One moment! :books:
----------------------
If you want to pass multiple examples or messages as input to the agent, you can do so by creating a list of ChatMessage objects and passing that list to the ReActChatFormatter's format() method. Each ChatMessage object represents a message in the conversation.

Here's an example:

Plain Text
from llama_index.core.agent import ReActChatFormatter
from llama_index.core.llms import ChatMessage

# Initialize the chat formatter
chat_formatter = ReActChatFormatter()

# Define your tools and chat history
tools = [...]  # Your tools go here
chat_history = [
    ChatMessage(
        content="Your first message content here",
        role="user",
    ),
    ChatMessage(
        content="Your second message content here",
        role="assistant",
    ),
    # Add more messages as needed
]

# Format the chat history
input_msgs = chat_formatter.format(tools, chat_history)


In this example, input_msgs is the list of ChatMessage objects that you can pass to the LLM. Each ChatMessage has a content attribute (the message text) and a role attribute (the sender's role, typically "system", "user", or "assistant").

Please note that you need to replace [...] and the placeholder message strings with your actual tools and message content.
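A common pattern is to pack reference examples as alternating user/assistant turns ahead of the live question. Here is a plain-dict sketch of that packing; the build_history helper is illustrative, not a llama_index function:

```python
def build_history(examples, question):
    """Turn (question, answer) example pairs into alternating chat turns,
    then append the real question as the final user message."""
    history = []
    for q, a in examples:
        history.append({"role": "user", "content": q})
        history.append({"role": "assistant", "content": a})
    history.append({"role": "user", "content": question})
    return history


history = build_history(
    [("What is the capital of France?", "Paris"), ("What is 2 + 2?", "4")],
    "What is the capital of Japan?",
)
```

The final user message is the question you actually want answered; everything before it serves as reference context.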

For more information, you can refer to the following sources:

  • ReActChatFormatter documentation
  • Example of using ReActChatFormatter
----------------------
@kapa.ai How can I get structured output while working with ReAct agents ?
Let me check the knowledge base for you. One moment! :books:
----------------------
To get structured output while working with ReAct agents, you can use the ReActOutputParser class. This class parses the output of the ReAct agent into a structured format.

Here's an example of how you can use it:

Plain Text
from typing import Any, Dict

from llama_index.core.agent.react.output_parser import ReActOutputParser
from llama_index.core.agent.types import Task
from llama_index.core.llms import ChatResponse
from llama_index.core.query_pipeline import AgentFnComponent


def parse_react_output_fn(
    task: Task, state: Dict[str, Any], chat_response: ChatResponse
):
    """Parse ReAct output into a reasoning step."""
    output_parser = ReActOutputParser()
    reasoning_step = output_parser.parse(chat_response.message.content)
    return {"done": reasoning_step.is_done, "reasoning_step": reasoning_step}


parse_react_output = AgentFnComponent(fn=parse_react_output_fn)


In this example, parse_react_output_fn is a function that takes a task, a state, and a chat response, and returns a dictionary with two keys: "done" (a boolean indicating whether the reasoning step is done) and "reasoning_step" (the parsed reasoning step).

Please note that Task, Dict[str, Any], and ChatResponse here are type annotations, not placeholders: the agent supplies the actual task, state, and chat response objects when it invokes the function at runtime.
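For intuition, the text the parser consumes follows a Thought/Action/Answer convention: a completion either ends with a final Answer or names a tool to call. Here is a rough stdlib-only sketch of parsing that format, hand-rolled for illustration; the real ReActOutputParser is considerably more robust:

```python
import re


def parse_react_text(text: str) -> dict:
    """Classify a ReAct-style completion as a final answer or a tool call."""
    answer = re.search(r"Answer:\s*(.+)", text, re.DOTALL)
    if answer:
        return {"done": True, "answer": answer.group(1).strip()}
    action = re.search(r"Action:\s*(\S+)", text)
    action_input = re.search(r"Action Input:\s*(.+)", text, re.DOTALL)
    return {
        "done": False,
        "action": action.group(1) if action else None,
        "action_input": action_input.group(1).strip() if action_input else None,
    }


step = parse_react_text(
    "Thought: I can answer without using any more tools.\nAnswer: 4"
)
```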

For more information, you can refer to the following sources:

  • ReActOutputParser documentation
  • Example of using ReActOutputParser
----------------------
@kapa.ai how to use PydanticOutputparse with ReAct agent ?
Let me check the knowledge base for you. One moment! :books:
----------------------
To use PydanticOutputParser with a ReAct agent, you would need to initialize the PydanticOutputParser with your desired output class and then use it to parse the output from the ReAct agent. However, the available knowledge sources do not include a direct example of combining PydanticOutputParser with a ReAct agent.

Here's a general example of how you might use PydanticOutputParser:

Plain Text
from pydantic import BaseModel

from llama_index.core.output_parsers import PydanticOutputParser


# Define your output class
class MyOutputClass(BaseModel):
    # Define your fields here
    ...


# Initialize the output parser
output_parser = PydanticOutputParser(output_cls=MyOutputClass)

# Use the output parser to parse the text output from the agent
parsed_output = output_parser.parse(output_text)


In this example, MyOutputClass should be replaced with your actual output class, and output_text should be replaced with the text output from the ReAct agent.

Please note that this is a general example and might need to be adapted based on the specifics of your use case and the structure of your output class.
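Since a Pydantic-style parser essentially extracts a JSON object from the LLM text and validates it into your class, a dependency-free analogue using a dataclass and the json module may help illustrate the idea. This is a hand-rolled sketch, not the library implementation, and the Album class is a made-up example:

```python
import json
import re
from dataclasses import dataclass


@dataclass
class Album:
    title: str
    year: int


def parse_structured(text: str) -> Album:
    """Pull the first {...} JSON object out of the text and build the class."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in output")
    data = json.loads(match.group(0))
    return Album(title=data["title"], year=int(data["year"]))


album = parse_structured(
    'Here is the result: {"title": "Kind of Blue", "year": 1959}'
)
```

The library version does more (schema-driven prompting, validation errors), but the extract-then-validate shape is the same.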

For more information, you can refer to the following sources:

  • PydanticOutputParser documentation
  • Example of using PydanticOutputParser
----------------------