
Hi, for my chat_engine I have set up my prompt like this:

"You are a chatbot that answers questions about "company" models.
Instructions:
1. Never provide information that is not literally in the knowledge base.
2. If the answer cannot be found in the knowledge base, tell the user that the provided context does not tell you anything about the subject in question.
3. Always provide a link to the page that is used to generate the answer."

However, in some cases the chat_engine still uses information that is not in the context. Can someone help me debug this?
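For reference, a minimal sketch of the same instructions written as a single Python string; the SYSTEM_PROMPT name is hypothetical, and this is only one way the prompt might look in code:

```python
# Hypothetical variable name; the instruction text is the one quoted above.
SYSTEM_PROMPT = (
    'You are a chatbot that answers questions about "company" models.\n'
    "Instructions:\n"
    "1. Never provide information that is not literally in the knowledge base.\n"
    "2. If the answer cannot be found in the knowledge base, tell the user that "
    "the provided context does not tell you anything about the subject in question.\n"
    "3. Always provide a link to the page that is used to generate the answer.\n"
)
```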
2 comments
Is this in your system prompt? Also, which LLM are you using? Those will make a difference here. In general, GPT-3.5 will follow the user prompt a lot better.
This is how the chat engine is set up
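As a concrete starting point, here is a minimal sketch of how a system prompt is commonly wired into a LlamaIndex chat engine. The model choice, the ./knowledge_base path, and chat_mode="context" are assumptions for illustration, not details confirmed in this thread:

```python
# A sketch only: assumes a recent llama-index-core install plus the
# llama-index-llms-openai integration package.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms.openai import OpenAI

# The choice of model matters for how strictly instructions are followed;
# gpt-3.5-turbo is used here purely as an example.
Settings.llm = OpenAI(model="gpt-3.5-turbo", temperature=0)

# Condensed version of the instructions quoted in the question.
SYSTEM_PROMPT = (
    'You are a chatbot that answers questions about "company" models. '
    "Only answer from the knowledge base and always link the source page."
)

documents = SimpleDirectoryReader("./knowledge_base").load_data()
index = VectorStoreIndex.from_documents(documents)

# With chat_mode="context", retrieved chunks are injected alongside the
# system prompt on every turn; system_prompt carries the custom instructions.
chat_engine = index.as_chat_engine(
    chat_mode="context",
    system_prompt=SYSTEM_PROMPT,
)

print(chat_engine.chat("What does the knowledge base say about model X?"))
```

If the engine is built with a different chat_mode, the system_prompt keyword may be handled differently, which is one place instructions can get lost.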