Vllm

Hello, does anyone know how to use prompt template with vllm?
https://docs.llamaindex.ai/en/stable/api_reference/llms/vllm/

It seems that messages_to_prompt and completion_to_prompt are set when loading the LLM, but you can't supply few-shot examples at runtime.
2 comments
I'm not sure what you mean?

messages_to_prompt and completion_to_prompt are function hooks that transform the input into a model-specific format.

If you want few-shot prompting, you should update the prompt for whatever specific module you are using
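For context, here is a minimal sketch of how those hooks can be passed when constructing the Vllm LLM. It assumes the llama_index.llms.vllm.Vllm constructor accepts these hooks, an [INST]-style instruct format, and an example model name; adjust for your model.
Plain Text
# Sketch (assumptions: llama_index.llms.vllm.Vllm accepts these hooks,
# and the model expects a simple [INST] ... [/INST] chat format)
from llama_index.llms.vllm import Vllm

def messages_to_prompt(messages):
    # Flatten chat messages into a single prompt string
    return "\n".join(f"{m.role}: {m.content}" for m in messages) + "\nassistant: "

def completion_to_prompt(completion):
    # Wrap a bare completion request in the model's instruction format
    return f"[INST] {completion} [/INST]"

llm = Vllm(
    model="mistralai/Mistral-7B-Instruct-v0.1",  # example model, swap in your own
    messages_to_prompt=messages_to_prompt,
    completion_to_prompt=completion_to_prompt,
)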
I want to ask the LLM a question directly, instead of going through the whole RAG pipeline, which requires me to specify the retriever, text_qa_template, and response_synthesizer.

When I loaded the LLM using AutoModelForCausalLM, I could do the following:
Plain Text
# Assumes LCPromptTemplate is LangChain's PromptTemplate under an alias
from langchain.prompts import PromptTemplate as LCPromptTemplate

template = """some text here"""
prompt_template = LCPromptTemplate(
    input_variables=["text"],
    template=template,
)
response = llm(prompt_template.format(text=question))

But when I loaded the LLM using Vllm, I got the following error:
'Vllm' object is not callable
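For reference, a minimal sketch of querying the LLM directly without the RAG pipeline, assuming LlamaIndex's complete() API and its core PromptTemplate; LlamaIndex LLM objects expose complete()/chat() rather than being callable like a Hugging Face pipeline. The template text and question are placeholders.
Plain Text
# Sketch (assumptions: llm is the Vllm instance from above; few-shot examples
# are just inlined into the prompt template text)
from llama_index.core import PromptTemplate

template = """Answer the question.

{few_shot_examples}

Question: {text}
Answer:"""
prompt_template = PromptTemplate(template)

question = "What is the capital of France?"  # placeholder question
response = llm.complete(
    prompt_template.format(few_shot_examples="Q: 2+2? A: 4", text=question)
)
print(response.text)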