Updated 3 months ago

How to implement prompt engineering well

Hi everyone, I have a question. I implemented RAG using an Ensemble Retriever.
Before using the prompt template module, if I sent a query like "Hello", the LLM would not respond, because the query did not exist in the documents.

And I was able to solve this problem by using the prompt template module.

How important is prompt template engineering?
And what should I do to set it up well?
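(For readers unfamiliar with it: LangChain's EnsembleRetriever merges the ranked results of several retrievers, e.g. BM25 plus a dense vector store, using weighted Reciprocal Rank Fusion. A dependency-free sketch of that fusion step; the function name and document IDs below are illustrative, not LangChain's API.)

```python
def rrf_merge(rankings, weights=None, k=60):
    """Combine several ranked result lists with weighted Reciprocal
    Rank Fusion: each document scores weight / (k + rank)."""
    weights = weights or [1.0] * len(rankings)
    scores = {}
    for ranked, w in zip(rankings, weights):
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + w / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Two hypothetical retrievers (e.g. keyword + dense) returning doc IDs
bm25_hits = ["d1", "d2", "d3"]
dense_hits = ["d3", "d1", "d4"]
merged = rrf_merge([bm25_hits, dense_hits], weights=[0.5, 0.5])
print(merged)  # "d1" ranks first: it appears high in both lists
```

Documents that rank well in several retrievers float to the top, which is why an ensemble is more robust than either retriever alone.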
template = '''
You are a chatbot fluent in Korean developed specifically for our company.
Your primary role is to communicate with users by answering their questions in Korean and providing feedback related to their questions.
Your job is to answer the user's question {query} about the company's employment rules.
If the {query} is not related to the company's employment rules, it is your responsibility to redirect the conversation to a topic related to the company's policies and guidelines.
You can also recommend questions to users based on your knowledge.
We encourage our users to ask questions that are directly related to our company's operations, culture, or the specific guidelines we follow.
'''
Don't pass the actual question into the {query} slot of the template; otherwise LGTM.
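One way to read that advice: keep the system template static and send the user's question as its own chat message, rather than interpolating it into the system prompt. A minimal sketch using the OpenAI-style message format; the template wording and helper name are made up for illustration:

```python
SYSTEM_TEMPLATE = (
    "You are a chatbot fluent in Korean developed for our company. "
    "Answer questions about the company's employment rules using only "
    "the context below.\n\nContext:\n{context}"
)

def build_messages(context, query):
    # Only the retrieved context is interpolated into the system prompt;
    # the user's question travels as a separate user message.
    return [
        {"role": "system", "content": SYSTEM_TEMPLATE.format(context=context)},
        {"role": "user", "content": query},
    ]

msgs = build_messages("Annual leave: 15 days.", "How many vacation days do I get?")
print(msgs[1]["content"])  # the raw question, untouched
```

Keeping the question out of the system prompt makes the template reusable across turns and avoids the model confusing instructions with user input.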
Yes, that's right.
I want to implement a RAG chatbot. Is the above prompt suitable, or is there a better way to prompt?
Do you have other prompts set? Which LLM are you using?
I don't have other prompts. I'm using OpenAI GPT-4.
Is that your system message?
https://promptingguide.ai/
Lot of tricks here. See what works for your use case.