Hello, I am testing the context chat engine, which works fine with OpenAI. To use it with local models, I wanted to adapt the prompt to the specific prompt template the model was trained on (as recommended), but I am not sure how to customize the full prompt of the chat engine (not just the system prompt). Are there any recommendations on how to customize the prompt for the context chat engine (system prompt, context, history, query) to match the model's prompt template? I know how to get the full prompt of a query engine, and I found a prompt template for the condense question engine, but I cannot figure out how this is supposed to work for the context chat engine, where both context and chat history are passed to the model. Am I missing anything?
So you are trying to customize the context chat engine?

Really, the only thing to customize is the context template that wraps the retrieved context:
Python
# llama_index >= 0.10 import path; older versions use llama_index.chat_engine
from llama_index.core.chat_engine import ContextChatEngine

DEFAULT_CONTEXT_TEMPLATE = (
    "Context information is below."
    "\n--------------------\n"
    "{context_str}"
    "\n--------------------\n"
)

chat_engine = ContextChatEngine.from_defaults(..., context_template=DEFAULT_CONTEXT_TEMPLATE)
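For example, here is a minimal sketch of passing a customized template end to end. The toy index, the template wording, and the 0.10+ import paths are assumptions for illustration, not something from this thread; only the {context_str} placeholder is actually required by the engine.

Python
from llama_index.core import Document, VectorStoreIndex
from llama_index.core.chat_engine import ContextChatEngine

# Hypothetical one-document index, just so the sketch is self-contained.
index = VectorStoreIndex.from_documents(
    [Document(text="LlamaIndex lets you override the context template.")]
)

# Illustrative wording; the engine only needs {context_str} somewhere.
custom_context_template = (
    "Answer using only the documents below.\n"
    "--------------------\n"
    "{context_str}\n"
    "--------------------\n"
)

chat_engine = ContextChatEngine.from_defaults(
    retriever=index.as_retriever(),
    context_template=custom_context_template,
)
print(chat_engine.chat("What does LlamaIndex let you override?"))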
Exactly - but don't I have to format the prompt in a particular way depending on which model I use?
If you are using a model that requires special prompt formatting (i.e. most open-source models), there are function hooks on the LLM class that let you do this. They get applied to every LLM input.

Which LLM/LLM class are you using?
I planned on testing different LLMs to get a feeling for them. So far it works well with Llama-2 and Mixtral, but I also tried LeoLM, for example, which starts every answer with "assistant", and I wondered whether that would change if I could adapt the whole chat prompt template.
The context chat engine doesn't have a single full template; it's just the system prompt plus the context template shown above.

The prompt hooks would give you more control

Python
def completion_to_prompt(completion):
    # Receives the raw prompt string for completion calls; whatever you
    # return is sent to the model verbatim.
    return completion

def messages_to_prompt(messages):
    # Receives the list of ChatMessage objects (system prompt, context,
    # history, query); return a single formatted prompt string.
    return "\n".join([str(x) for x in messages])

llm = <LLM Class>(..., completion_to_prompt=completion_to_prompt, messages_to_prompt=messages_to_prompt)


I'm still not sure which LLM class you are using, but it should support the hooks above.
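For instance, with a Llama-2-chat-style model served through llama.cpp, the hooks could apply the [INST]/<<SYS>> template. This is a simplified sketch: the LlamaCPP class does accept these hooks, but the import path, the model path, and the exact template handling here are assumptions, not something confirmed in this thread.

Python
from llama_index.llms.llama_cpp import LlamaCPP

def messages_to_prompt(messages):
    # Fold a leading system message into the next [INST] block,
    # roughly following the Llama-2 chat convention.
    prompt, system = "", ""
    for m in messages:
        if m.role == "system":
            system = f"<<SYS>>\n{m.content}\n<</SYS>>\n\n"
        elif m.role == "user":
            prompt += f"[INST] {system}{m.content} [/INST]"
            system = ""
        elif m.role == "assistant":
            prompt += f" {m.content} "
    return prompt

def completion_to_prompt(completion):
    return f"[INST] {completion} [/INST]"

llm = LlamaCPP(
    model_path="path/to/llama-2-7b-chat.gguf",  # hypothetical path
    messages_to_prompt=messages_to_prompt,
    completion_to_prompt=completion_to_prompt,
)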