To format the chat history for a ReAct agent, you can use the `ReActChatFormatter` class. This class formats the chat history into a list of `ChatMessage` objects. Here's a basic example of how you can use it:

```python
from llama_index.core.agent import ReActChatFormatter
from llama_index.core.llms import ChatMessage

# Initialize the chat formatter
chat_formatter = ReActChatFormatter()

# Define your tools and chat history
tools = [...]  # Your tools go here
chat_history = [
    ChatMessage(
        content="Your message content here",
        role="user",
    )
]

# Format the chat history
input_msgs = chat_formatter.format(tools, chat_history)
```

`input_msgs` is the list of `ChatMessage` objects that you can pass to the LLM. Each `ChatMessage` object represents a message in the conversation and has a `content` attribute (the message content) and a `role` attribute (the role of the message sender, which can be "user" or "assistant").

To update the system prompt of a ReAct agent, you can use the `PromptTemplate` class and the `agent.update_prompts()` method:

```python
from llama_index.core import PromptTemplate

# Define your system prompt
react_system_prompt = PromptTemplate("Your system prompt here")

# Update the system prompt
agent.update_prompts({"agent_worker:system_prompt": react_system_prompt})
```
"Your system prompt here"
should be replaced with your actual system prompt.[...]
and "Your message content here"
with your actual tools and message content.agent.get_prompts()
method, and you can update the prompts using the agent.update_prompts()
method.# Get the current prompts prompt_dict = agent.get_prompts() for k, v in prompt_dict.items(): print(f"Prompt: {k}\n\nValue: {v.template}") # Update the prompts react_system_prompt = PromptTemplate("Your system prompt here") agent.update_prompts({"agent_worker:system_prompt": react_system_prompt})
"Your system prompt here"
should be replaced with your actual system prompt.agent.chat()
method.agent.chat()
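For example, here is a minimal sketch. It assumes an OpenAI LLM and that `agent` is a `ReActAgent` built from the `tools` list defined earlier; neither choice is prescribed by the sources above.

```python
from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI

# Assumption: OpenAI as the LLM; any chat-capable llama_index LLM works here.
llm = OpenAI(model="gpt-4o-mini")
agent = ReActAgent.from_tools(tools, llm=llm, verbose=True)

# Run the agent with a single question
response = agent.chat("Your question here")
print(str(response))
```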
To include multiple messages in the chat history, create a list of `ChatMessage` objects and pass that list to the `ReActChatFormatter`'s `format()` method. Each `ChatMessage` object represents a message in the conversation:

```python
from llama_index.core.agent import ReActChatFormatter
from llama_index.core.llms import ChatMessage

# Initialize the chat formatter
chat_formatter = ReActChatFormatter()

# Define your tools and chat history
tools = [...]  # Your tools go here
chat_history = [
    ChatMessage(
        content="Your first message content here",
        role="user",
    ),
    ChatMessage(
        content="Your second message content here",
        role="assistant",
    ),
    # Add more messages as needed
]

# Format the chat history
input_msgs = chat_formatter.format(tools, chat_history)
```
`input_msgs` is the list of `ChatMessage` objects that you can pass to the LLM. Each `ChatMessage` object has a `content` attribute (the message content) and a `role` attribute (the role of the message sender, which can be "user" or "assistant").
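As a brief, hedged illustration of that last step (the OpenAI model here is an assumption, not something specified above), you can send the formatted messages directly to an LLM's chat endpoint:

```python
from llama_index.llms.openai import OpenAI

# Assumption: OpenAI as the LLM; any llama_index LLM exposing .chat() would do.
llm = OpenAI(model="gpt-4o-mini")

# Pass the formatted messages to the LLM and read back its reply
chat_response = llm.chat(input_msgs)
print(chat_response.message.content)
```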
Remember to replace `[...]`, `"Your first message content here"`, and `"Your second message content here"` with your actual tools and message content.

To parse the agent's output, you can use the `ReActOutputParser` class. This class parses the output of the ReAct agent into a structured format:

```python
from typing import Any, Dict

from llama_index.core.agent.react.output_parser import ReActOutputParser
from llama_index.core.agent.types import Task
from llama_index.core.llms import ChatResponse
from llama_index.core.query_pipeline import AgentFnComponent


def parse_react_output_fn(
    task: Task, state: Dict[str, Any], chat_response: ChatResponse
):
    """Parse ReAct output into a reasoning step."""
    output_parser = ReActOutputParser()
    reasoning_step = output_parser.parse(chat_response.message.content)
    return {"done": reasoning_step.is_done, "reasoning_step": reasoning_step}


parse_react_output = AgentFnComponent(fn=parse_react_output_fn)
```
`parse_react_output_fn` is a function that takes a task, a state, and a chat response, and returns a dictionary with two keys: `"done"` (a boolean indicating whether the reasoning step is done) and `"reasoning_step"` (the parsed reasoning step). Replace the `Task`, `Dict[str, Any]`, and `ChatResponse` arguments with your actual task, state, and chat response.

To use `PydanticOutputParser` with a ReAct agent, you would need to initialize the `PydanticOutputParser` with your desired output class and then use it to parse the output from the ReAct agent. Note, however, that the provided knowledge sources do not give a direct example of using `PydanticOutputParser` with a ReAct agent.

Here's a basic example of how to use `PydanticOutputParser`:
```python
from pydantic import BaseModel

from llama_index.core.output_parsers import PydanticOutputParser


# Define your output class
class MyOutputClass(BaseModel):
    # Define your fields here
    ...


# Initialize the output parser
output_parser = PydanticOutputParser(output_cls=MyOutputClass)

# Use the output parser to parse the output
parsed_output = output_parser.parse(output_text)
```
`MyOutputClass` should be replaced with your actual output class, and `output_text` should be replaced with the text output from the ReAct agent.
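As an illustrative, undocumented sketch of how the two pieces might be combined (the `output_parser.format()` call appends the parser's format instructions to the query; `agent` and `MyOutputClass` are assumed to be defined as above):

```python
# Hedged sketch, not a documented pattern: ask the agent to answer in the
# parser's expected JSON format, then parse its final response.
query = output_parser.format("Your question here")
response = agent.chat(query)
parsed_output = output_parser.parse(str(response))  # -> MyOutputClass instance
```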