If you are using `is_chat_model=False` but want more control over the formatting of the LLM prompt, you can set a function hook:

```python
# Very naive formatting; this is essentially what the default does.
def messages_to_prompt(messages):
    return "\n".join([str(x) for x in messages])

llm = OpenAILike(...., messages_to_prompt=messages_to_prompt)
```
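As a sketch of a more structured hook, the example below prefixes each message with its role and ends with an assistant cue so the completion model knows it should respond next. The `ChatMessage` dataclass here is a hypothetical minimal stand-in, used only to keep the example self-contained; in practice the messages would be the library's own chat message objects.

```python
from dataclasses import dataclass

# Hypothetical stand-in for a chat message; the real class comes from
# the library you are using.
@dataclass
class ChatMessage:
    role: str
    content: str

def messages_to_prompt(messages):
    # Prefix each message with its role, then append an assistant cue
    # so the completion model knows it should respond next.
    lines = [f"{m.role}: {m.content}" for m in messages]
    lines.append("assistant: ")
    return "\n".join(lines)

messages = [
    ChatMessage(role="system", content="You are a helpful assistant."),
    ChatMessage(role="user", content="Hello!"),
]
prompt = messages_to_prompt(messages)
```

You would pass this function in the same way, via the `messages_to_prompt` keyword argument.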