I am using the react chat engine and want to override MessageRole.SYSTEM when submitting a query. E.g., the default from llama_debug:
Plain Text
ChatMessage(role=<MessageRole.SYSTEM: 'system'>, content='\nYou are designed to help with a variety of tasks, from answering questions...
Hmm, never done this before, but looking at the code it should be possible

The react agent takes an argument called react_chat_formatter
https://github.com/jerryjliu/llama_index/blob/6e9a7b41db1adc2eb8b29be0b650b6842799ca28/llama_index/agent/react/base.py#L70

This is a specific class defined here
https://github.com/jerryjliu/llama_index/blob/6e9a7b41db1adc2eb8b29be0b650b6842799ca28/llama_index/agent/react/formatter.py#L47

As you can see, you can change the system_header to be your own instructions

Plain Text
formatter = ReActChatFormatter(system_header="my header")
agent = ReActAgent.from_tools(..., react_chat_formatter=formatter)


The default system header is here. You'll probably only want to make small adjustments to it
https://github.com/jerryjliu/llama_index/blob/6e9a7b41db1adc2eb8b29be0b650b6842799ca28/llama_index/agent/react/prompts.py#L7
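If you only want small adjustments, one approach is to append your extra rules to the default header rather than rewriting it. A minimal sketch (the constant name `REACT_CHAT_SYSTEM_HEADER` is taken from the linked prompts.py, but a stand-in string is used here so the snippet is self-contained; the rule wording is illustrative):

```python
# Minimal sketch: extend the default ReAct header instead of replacing it.
# In llama_index you would import the real default, e.g.
#   from llama_index.agent.react.prompts import REACT_CHAT_SYSTEM_HEADER
# A stand-in string is used here so the snippet runs on its own.
DEFAULT_HEADER = "You are designed to help with a variety of tasks..."

EXTRA_RULES = (
    "\n\n## Additional Rules\n"
    "Only answer using information returned by the tools. "
    "If the tools do not contain the answer, say you don't know."
)

# Keep the default behavior, add the restriction at the end.
custom_header = DEFAULT_HEADER + EXTRA_RULES

# Then, per the snippet above:
#   formatter = ReActChatFormatter(system_header=custom_header)
#   agent = ReActAgent.from_tools(..., react_chat_formatter=formatter)
```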
@Logan M As always, your help is much appreciated! I need your advice on something. I'm looking into overriding MessageRole.SYSTEM to ensure the chat engine I'm using only answers questions related to the context, without drawing on prior knowledge. I only want the chat engine to answer questions about the indexed docs. Testing purely with chat modes (react, condense_question, openai, context), when I rely on custom QA prompts / system prompts the engine will eventually begin answering questions outside of the knowledge base (context), even though the instructions prohibit it. I recently removed langchain from my app and am hoping to accomplish this using only llamaindex. Can you point me in the right direction?
Sounds like you just need a stricter system prompt? There's really no way to do this with 100% certainty, since it's all prompt engineering.

For all other chat engines besides react, setting the system prompt is waaaaay easier.

Plain Text
chat_engine = index.as_chat_engine(chat_mode='openai', system_prompt="my system prompt")


or from scratch

Plain Text
agent = OpenAIAgent.from_tools(..., system_prompt="my system prompt")
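In practice, strict prompts tend to hold up better when they pair the prohibition with an explicit fallback response the model should emit, rather than only saying "don't". A sketch of such a prompt (the wording is illustrative, not from llama_index):

```python
# Illustrative only: a "strict" system prompt that gives the model a
# concrete fallback answer instead of a bare prohibition. Models tend
# to follow "reply exactly X when Y" more reliably than "never do Z".
STRICT_SYSTEM_PROMPT = (
    "You answer questions strictly from the provided context. "
    "If the context does not contain the answer, reply exactly: "
    "'I can only answer questions about the indexed documents.' "
    "Never answer from prior or outside knowledge."
)

# Then, per the examples above:
#   chat_engine = index.as_chat_engine(
#       chat_mode='openai', system_prompt=STRICT_SYSTEM_PROMPT)
```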
@Logan M I started testing with context and using the system prompt. I put in every type of message to instruct the chat to not answer questions with prior knowledge but eventually it always begins answering questions and ignoring the prompt. I call chat.reset() after every response so I'm puzzled why this would happen.
I do pass in the last 5 interactions via chat_history and even that includes a standing message that instructs the chat to not use prior knowledge.
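One way to keep that standing instruction from being pushed out as the window slides is to re-prepend it on every turn when trimming the history. A minimal sketch (the `Message` dataclass is a stand-in for llama_index's ChatMessage, and `build_history` is a hypothetical helper, not a library API):

```python
from dataclasses import dataclass


@dataclass
class Message:
    """Stand-in for llama_index's ChatMessage (role + content)."""
    role: str
    content: str


STANDING_RULE = Message(
    role="system",
    content="Answer only from the indexed documents; never use prior knowledge.",
)


def build_history(past_messages, keep=5):
    """Hypothetical helper: keep only the last `keep` messages, but
    always re-prepend the standing rule so trimming never drops it."""
    return [STANDING_RULE] + list(past_messages)[-keep:]

# The trimmed history would then be passed on each turn, e.g.
#   chat_engine.chat(query, chat_history=build_history(history))
```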