To set a prompt template for a model like Microsoft's phi-2, you can use the `PromptTemplate` class from the `llama_index.core` module. Here's an example of how to define a custom prompt:
```python
from llama_index.core import PromptTemplate

template = (
    "We have provided context information below. \n"
    "---------------------\n"
    "{context_str}"
    "\n---------------------\n"
    "Given this information, please answer the question: {query_str}\n"
)
qa_template = PromptTemplate(template)

# you can create text prompt (for completion API)
prompt = qa_template.format(context_str=..., query_str=...)

# or easily convert to message prompts (for chat API)
messages = qa_template.format_messages(context_str=..., query_str=...)
```
(Source: LlamaIndex Documentation)
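For concreteness, here is what the rendered text prompt looks like with sample values filled in (the context and question strings below are made up for illustration):

```python
prompt = qa_template.format(
    context_str="phi-2 is a 2.7 billion parameter language model from Microsoft.",
    query_str="How many parameters does phi-2 have?",
)
print(prompt)
# We have provided context information below.
# ---------------------
# phi-2 is a 2.7 billion parameter language model from Microsoft.
# ---------------------
# Given this information, please answer the question: How many parameters does phi-2 have?
```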
To pass this to LlamaCPP, use the `messages_to_prompt` and `completion_to_prompt` parameters when initializing the model. Note that these parameters take functions, not prompt strings: `messages_to_prompt` converts a list of chat messages into a single model-specific prompt string, and `completion_to_prompt` wraps a plain completion prompt, so the model always sees text in the format it was trained on.
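Since phi-2 is a base completion model with no built-in chat template, you have to define these two callables yourself. The sketch below assumes phi-2's documented `Instruct: ... Output:` question-answering format; the exact wrapping, and especially the way multi-turn messages are flattened, is an assumption you should verify against the model card:

```python
def messages_to_prompt(messages):
    # Flatten the chat history (a sequence of LlamaIndex ChatMessage objects)
    # into one string. phi-2 has no real multi-turn template, so this
    # concatenation is illustrative only.
    history = "\n".join(f"{m.role.value}: {m.content}" for m in messages)
    return f"Instruct: {history}\nOutput: "

def completion_to_prompt(completion):
    # Wrap a bare completion prompt in phi-2's Instruct/Output format.
    return f"Instruct: {completion}\nOutput: "
```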
With those helpers defined, pass them to LlamaCPP:

```python
from llama_index.llms.llama_cpp import LlamaCPP

llm = LlamaCPP(
    model_url=model_url,  # URL of a GGUF model to download (placeholder)
    model_path=None,      # or a local path to an already-downloaded model
    temperature=0.1,
    max_new_tokens=256,
    context_window=3900,
    generate_kwargs={},
    model_kwargs={"n_gpu_layers": 1},
    messages_to_prompt=messages_to_prompt,      # formats chat messages for the model
    completion_to_prompt=completion_to_prompt,  # formats completion prompts for the model
    verbose=True,
)
```
(Source: LlamaIndex GitHub)
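Once the model is initialized, both APIs run through your formatting functions automatically. A quick smoke test (the question string is just an illustration):

```python
# complete() applies completion_to_prompt to the input before inference
response = llm.complete("How many parameters does phi-2 have?")
print(response.text)
```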