sorry is this how you change the model's prompt template lol


Plain Text
from llama_index.prompts import PromptTemplate

# Define your custom prompt format
template = (
    "<|system|>\n"
    "{System}\n"
    "<|user|>\n"
    "{User}\n"
    "<|assistant|>\n"
    "{Assistant}"
)

# Create a PromptTemplate with your custom format
custom_prompt = PromptTemplate(template)
nope πŸ™‚

Our LLMs all have messages_to_prompt and completion_to_prompt hooks

You can specify a function that will transform either messages or a normal string prompt into a format suitable for the LLM you are using

for example

Plain Text
from llama_index.llms import HuggingFaceLLM

def completion_to_prompt(completion):
  # wrap a plain string prompt in zephyr's chat tags, with an empty system block
  return f"<|system|>\n</s>\n<|user|>\n{completion}</s>\n<|assistant|>\n"

def messages_to_prompt(messages):
  # map each chat message to the matching zephyr role tag
  prompt = ""
  for message in messages:
    if message.role == 'system':
      prompt += f"<|system|>\n{message.content}</s>\n"
    elif message.role == 'user':
      prompt += f"<|user|>\n{message.content}</s>\n"
    elif message.role == 'assistant':
      prompt += f"<|assistant|>\n{message.content}</s>\n"

  # ensure we start with a system prompt, insert blank if needed
  if not prompt.startswith("<|system|>\n"):
    prompt = "<|system|>\n</s>\n" + prompt

  # add final assistant prompt
  prompt = prompt + "<|assistant|>\n"

  return prompt

llm = HuggingFaceLLM(
    model_name="HuggingFaceH4/zephyr-7b-beta",
    tokenizer_name="HuggingFaceH4/zephyr-7b-beta",
    ...,
    messages_to_prompt=messages_to_prompt,
    completion_to_prompt=completion_to_prompt
)
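
For a quick sanity check, you can call messages_to_prompt directly and print what it produces (a small sketch; the ChatMessage import path is the legacy one and the message contents are made up):

Plain Text
from llama_index.llms import ChatMessage

# hypothetical messages, just to show the formatting
messages = [
    ChatMessage(role="system", content="You are a helpful assistant."),
    ChatMessage(role="user", content="What does LlamaIndex do?"),
]

print(messages_to_prompt(messages))
# <|system|>
# You are a helpful assistant.</s>
# <|user|>
# What does LlamaIndex do?</s>
# <|assistant|>
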
@Logan M would it work if we were using vLLM?
lemme check if vllm takes advantage of those attributes
:sigh: it does and it doesn't
workaround --

Plain Text
llm = VllmServer(...)
llm.metadata.is_chat_model = True


This will force it to use messages_to_prompt

completion_to_prompt is ignored for whatever reason
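
Putting it together, a rough sketch (the VllmServer import path and api_url kwarg are assumptions based on the legacy vllm integration, and the URL is just a placeholder):

Plain Text
from llama_index.llms.vllm import VllmServer

llm = VllmServer(
    api_url="http://localhost:8000/generate",  # hypothetical URL, point this at your own vLLM server
    messages_to_prompt=messages_to_prompt,
)

# force chat mode so messages_to_prompt is actually used
llm.metadata.is_chat_model = True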