Prompting with Mixtral

I am following the docs for Evaluation and see that the default template is the one below. Should I add [INST] to this prompt? Or am I supposed to modify the prompt when I set the LLM? I am using vLLM (see below). Any help or ideas are appreciated. Thank you.

DEFAULT_EVAL_TEMPLATE = PromptTemplate(
    "Your task is to evaluate if the response is relevant to the query.\n"
    "The evaluation should be performed in a step-by-step manner by answering the following questions:\n"
    "1. Does the provided response match the subject matter of the user's query?\n"
    "2. Does the provided response attempt to address the focus or perspective "
    "on the subject matter taken on by the user's query?\n"
    "Each question above is worth 1 point. Provide detailed feedback on response according to the criteria questions above "
    "After your feedback provide a final result by strictly following this format: '[RESULT] followed by the integer number representing the total score assigned to the response'\n\n"
    "Query: \n {query}\n"
    "Response: \n {response}\n"
    "Feedback:"
)

How I set up Mixtral:

from llama_index.llms.vllm import Vllm

local_llm = Vllm(
    model="models/Mixtral-8x7B-Instruct-v0.1-GPTQ",
    dtype="half",
    tensor_parallel_size=2,
    temperature=0,
    max_new_tokens=250,
    vllm_kwargs={
        "swap_space": 1,
        "gpu_memory_utilization": 0.70,
        "max_model_len": 8000,
    },
)
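
For reference, my understanding is that Mixtral-Instruct expects the whole prompt wrapped in [INST] ... [/INST] tags, roughly like the sketch below (the query/response values are placeholders I made up, not from the docs):

# Sketch of what the rendered eval prompt would need to look like for Mixtral-Instruct;
# the tokenizer prepends <s> on its own.
prompt_text = DEFAULT_EVAL_TEMPLATE.format(
    query="What is the capital of France?",
    response="Paris is the capital of France.",
)
wrapped = f"[INST] {prompt_text} [/INST] "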
3 comments
I would use the messages_to_prompt and completion_to_prompt function hooks:
def completion_to_prompt(completion):
  return f"[INST] {completion} [/INST] "

def messages_to_prompt(messages):
  messages_str = "\n".join([str(x) for x in messages])
  return completion_to_prompt(messages_str)

llm = Vllm(..., messages_to_prompt=messages_to_prompt, completion_to_prompt=completion_to_prompt)
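
If it helps, here is a rough sketch of wiring that LLM into the evaluator. I'm assuming the template shown above is the AnswerRelevancyEvaluator default and that your llama_index version exposes it under llama_index.core.evaluation; the query/response values are placeholders:

from llama_index.core.evaluation import AnswerRelevancyEvaluator

# The two hooks above wrap every prompt in [INST] ... [/INST] before it reaches vLLM,
# so the default eval template itself can stay unchanged.
evaluator = AnswerRelevancyEvaluator(llm=llm)
result = evaluator.evaluate(
    query="What is the capital of France?",
    response="Paris is the capital of France.",
)
print(result.score, result.feedback)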
Awesome, thank you!