Quick question for prompt setup.
How would I go about wording the prompt template to set a specific context?
i.e., when the user asks "How do I turn off an instance?" it can't respond based on the indices, but if the query is worded "How do I turn off an instance in AWS?" it will generate a response. So I would need the prompt template to always assume the user's queries relate to AWS.
9 comments
one idea is to prefix the prompt:

Plain Text
const userInput = "how do i turn off an instance?";

const promptPrefix = "assume the following input is related to AWS:";

// prepend the context hint to whatever the user typed
const prompt = `${promptPrefix} ${userInput}`;

console.log(prompt);


vanilla mistral gives me:

Plain Text
To stop an Amazon Elastic Compute Cloud (Amazon EC2) instance in Amazon Web Services (AWS), follow these steps:

1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. In the navigation pane, choose "Instances."
3. Select the check box next to the name of the instance you want to stop. The Actions pane appears on the right side of the page.
4. Choose "Instance State > Stop Instances" from the Actions dropdown list.
5. In the Stop Instances dialog box, confirm that the correct instance is selected and review any associated warnings or notes. You can also choose an option for how to handle any associated Elastic IP addresses and IMDSv2 instance metadata options.
6. Choose "Yes, Stop Now" to stop the instance. The instance's state changes to "shutting down." Once the instance is no longer running, it will be in the "stopped" state and you will not be charged for any usage.

could extend beyond AWS with something like:

Plain Text
const promptPrefix = `assume the following input is related to ${provider}:`


where provider is some state value
I'm using the qa_prompt_tmpl so it would be slightly different from this:

Plain Text
qa_prompt_str = (
"Context information is below.\n"
"---------------------\n"
"{context_str}\n"
"---------------------\n"
"Given the context information and not prior knowledge, "
"answer the question: {query_str}\n"
)
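
For reference, this template string usually gets wrapped and registered on the query engine, and the prefix trick then applies only to the query you pass in. A minimal sketch, assuming a recent LlamaIndex where PromptTemplate is importable from llama_index.core, and where index is an already-built vector index (both assumptions, not shown in this thread):

Plain Text
from llama_index.core import PromptTemplate

# wrap the template string from above
qa_prompt_tmpl = PromptTemplate(qa_prompt_str)

# register it as the text QA template; the engine fills in
# {context_str} from retrieval and {query_str} from the query
query_engine = index.as_query_engine(text_qa_template=qa_prompt_tmpl)

# prefix only the query; the template itself stays unchanged
response = query_engine.query("assume the following input is related to AWS: how do i turn off an instance?")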
yes, this is the same concept of prefixing the query (sorry, JS is my go-to!)

just above your block of code, I would modify query_str like this:

Plain Text
query_str = f"assume the following input is related to AWS: {query_str}"


or, if you want to make it more dynamic like above:

Plain Text
context_str = "AWS"

query_str = "how do i turn off an instance?"

prompt_prefix = f"assume the following input is related to {context_str}:"

qa_prompt_str = f"{prompt_prefix} {query_str}"
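
One thing to watch with this version: the prefixed string is meant to fill the template's {query_str} slot, not to replace the whole qa_prompt_str template. A quick sketch with plain str.format, where context_chunks is just an illustrative stand-in for whatever text your index retrieves:

Plain Text
# illustrative stand-in for retrieved index text
context_chunks = "You can stop an EC2 instance from the console or the CLI."

prefixed_query = f"{prompt_prefix} {query_str}"

# fill both slots of the original template
final_prompt = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "answer the question: {query_str}\n"
).format(context_str=context_chunks, query_str=prefixed_query)

print(final_prompt)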
Looks like that's working, but not necessarily giving me the same answer as "How do I turn off an instance in AWS". It gives an answer, but not as detailed/precise. Here's what I did:

Plain Text
context_str = "AWS"
prompt_prefix = f"Assume the following input is related to {context_str}:"
qa_prompt_str = (
"You are User assistant. Context information is given below. \n"
"------------\n"
f"{prompt_prefix}\n"
"------------\n"
"Given the context information and not prior knowledge, "
"answer the query in detailed manner and point wise if required. \n"
"Query: {query_str}\n"
"Answer: "
)
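
One possible reason the answer is less detailed: this template drops the {context_str} slot, so the retrieved index text never makes it into the prompt. A sketch that keeps the prefix but restores the context slot (a guess at the intent, not a confirmed fix):

Plain Text
qa_prompt_str = (
"You are User assistant. Context information is given below. \n"
"------------\n"
"{context_str}\n"
"------------\n"
f"{prompt_prefix}\n"
"Given the context information and not prior knowledge, "
"answer the query in detailed manner and point wise if required. \n"
"Query: {query_str}\n"
"Answer: "
)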
Have you tried using the exact example I gave? The reason I say this is that, in my (limited) experience with different models, small talk in prompts kinda gets in the way. I wonder if it's getting hung up on some of this prompt's design.
in a perfect world with a perfect model, your prompt is ideal: it's logical and well laid out. But I wonder if the LLM is interpreting it all
just giving f"{prompt_prefix} {query_str}" as the prompt
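
In code, that suggestion comes down to something like the following, where query_engine is assumed to be the existing LlamaIndex query engine with its stock QA template:

Plain Text
prompt_prefix = "Assume the following input is related to AWS:"

query_str = "how do i turn off an instance?"

# send only the prefixed query; the default QA template handles the retrieved context
response = query_engine.query(f"{prompt_prefix} {query_str}")

print(response)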