I’m using OpenAILike class with vLLM.

At a glance

The community members are discussing the use of the OpenAILike class with a language model. One community member reports that the output includes a "System:" prefix even when no system prompt is provided. Another community member suggests this happens because is_chat_model is set to False. The proposed solution is to pass a custom messages_to_prompt function that formats the input to the language model, which gives more control over prompt formatting.
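
That prefix comes from how completion-style prompts are assembled: with is_chat_model=False, each chat message is stringified as roughly "role: content" before being joined into a single prompt, so even an empty system message contributes a "system:" line. A minimal illustration, assuming LlamaIndex's ChatMessage (llama-index >= 0.10 import path; details may vary by version):

Python
from llama_index.core.llms import ChatMessage

# An empty system message still carries its role prefix when stringified,
# which is how a bare "system:" line ends up at the top of the prompt.
msg = ChatMessage(role="system", content="")
print(str(msg))  # e.g. prints "system: "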

10 comments
Yea, use that messages_to_prompt hook -- this will basically give you universal control over the LLM input
I might be using it wrong with the chat engine, but when I’m using OpenAILike without is_chat_model enabled, even if the system prompt is empty it will send the message as follows:
‘System:
Message body’
Is there a way to get rid of this ‘System:’ at the beginning of every message?
I’m thinking that it’s from the LLM metadata when none is being passed to the OpenAILike class?
Hmm, will have to check this. That typically happens because is_chat_model is being set to False?
Yes! Even when I’m passing a system prompt afterwards, it still does
‘system: {system prompt}’
No, I set is_chat_model to False
Ah, then that would be why 👀 If you want is_chat_model=False but more control over the formatting of the LLM prompt, you can set a function hook:

Python
from llama_index.llms.openai_like import OpenAILike  # llama-index >= 0.10 import path

# Very naive formatting: this is essentially what the default does.
# Each message is stringified as "role: content" and the lines are joined.
def messages_to_prompt(messages):
    return "\n".join([str(x) for x in messages])

llm = OpenAILike(...., messages_to_prompt=messages_to_prompt)
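
If the goal is specifically to drop the empty system line, the hook can also filter out blank system messages before joining. A sketch under the same assumptions (llama-index >= 0.10 import paths; this filtering is not the library's built-in behavior):

Python
from llama_index.core.llms import MessageRole

def messages_to_prompt(messages):
    # Format each message as "role: content", skipping system
    # messages with empty content so the prompt doesn't start
    # with a bare "system:" line.
    parts = []
    for m in messages:
        if m.role == MessageRole.SYSTEM and not m.content:
            continue
        parts.append(f"{m.role.value}: {m.content}")
    return "\n".join(parts)

llm = OpenAILike(...., messages_to_prompt=messages_to_prompt)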
Thanks, I’ll try it and let you know!