Find answers from the community

Updated last year


At a glance
The community members are discussing how to change the system prompt of the default query engine in the LlamaIndex library. They suggest setting the system prompt via the ServiceContext.from_defaults method and provide examples of how to do this. Some community members also mention modifying the response synthesis part of the pipeline to change the system prompt there specifically. The community members reference the LlamaIndex documentation, which they say does not cover this basic setup in detail.
Does anyone know how to change the system prompt of the default query engine?
23 comments
Oh, the system prompt.
You can add that into ServiceContext.from_defaults:
system_prompt: Optional[str] = None,
I'm a total newcomer to LlamaIndex, so I'm sorry if this doesn't make sense, but here's what I have:
Plain Text
from llama_index.prompts import ChatPromptTemplate, ChatMessage, MessageRole

message_templates = [
    ChatMessage(content="You are an expert system.", role=MessageRole.SYSTEM),
    ChatMessage(
        content="Generate a short story about {topic}",
        role=MessageRole.USER,
    ),
]

There still is a system prompt, but I don't quite understand how to pass or use the message_templates in the index.
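A minimal sketch of one way to actually use those message_templates (not something shown in this thread, and assuming an existing index): wrap them in a ChatPromptTemplate and pass it as the text_qa_template of the query engine. The engine fills in {context_str} and {query_str}, so the user message should use those placeholders rather than a free-form {topic}.

Plain Text
from llama_index.llms import ChatMessage, MessageRole
from llama_index.prompts import ChatPromptTemplate

# Chat-style QA template; {context_str} and {query_str} are substituted
# by the query engine at query time.
qa_messages = [
    ChatMessage(content="You are an expert system.", role=MessageRole.SYSTEM),
    ChatMessage(
        content=(
            "Context information is below.\n"
            "{context_str}\n"
            "Answer the question: {query_str}"
        ),
        role=MessageRole.USER,
    ),
]
chat_qa_template = ChatPromptTemplate(message_templates=qa_messages)

# The template is forwarded to the response synthesizer.
query_engine = index.as_query_engine(text_qa_template=chat_qa_template)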
Let's say you wanted to do the chat message above

Plain Text
service_context = ServiceContext.from_defaults(
    llm=llm,
    system_prompt="Generate a short story about..",
)


and if you have the topic at that point, you can concatenate it into the string
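A minimal sketch of that concatenation (assuming llm is already defined; the topic value here is purely illustrative):

Plain Text
from llama_index import ServiceContext

topic = "a lighthouse keeper"  # placeholder value, just for illustration
service_context = ServiceContext.from_defaults(
    llm=llm,
    system_prompt=f"Generate a short story about {topic}.",
)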
https://docs.llamaindex.ai/en/stable/understanding/querying/querying.html
In the docs they offer a couple of settings, but they don't go into detail on modifying the response synthesis step.
And then I use it with my index like this: query_engine = index.as_query_engine(service_context=service_context). OK, understood.
Do you know where I can find the kwargs for the .from_defaults constructor?
But there's no system_prompt or temperature setting there, unless I'm missing it.
There is a system_prompt.
temperature isn't in the service context; that's set on your LLM.
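If the docs aren't handy, one quick way to see which kwargs .from_defaults accepts (including system_prompt) is to inspect the signature directly; a minimal sketch:

Plain Text
import inspect

from llama_index import ServiceContext

# Prints every keyword argument ServiceContext.from_defaults accepts.
print(inspect.signature(ServiceContext.from_defaults))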
ok, but on the other hand, if I set the system_prompt through service_context, it will be used in all parts of the pipeline, won't it?
I'm looking for ways to modify the system prompt only in the response synthesis part. If I can't find a way to do this through LlamaIndex, the current plan is to use the no_text response mode and just use the nodes after postprocessing manually (a rough sketch of that fallback follows below).
But I really prefer to keep it in LlamaIndex and as close to the default setup as possible.
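A rough sketch of that no_text fallback (assuming an existing index): the "no_text" response mode skips LLM synthesis entirely, so the response only carries the retrieved source nodes for manual post-processing.

Plain Text
# "no_text" runs retrieval (and any node postprocessors) but no synthesis;
# the retrieved nodes come back on response.source_nodes.
query_engine = index.as_query_engine(response_mode="no_text", similarity_top_k=2)
response = query_engine.query("my question")
for node_with_score in response.source_nodes:
    print(node_with_score.node.get_content())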
I think that's what you are looking for.
Thanks for the information, I think I got the answer. Basic setup for custom prompts and temperature, if anyone else is searching for this:
Plain Text
from llama_index import ServiceContext
from llama_index.llms import OpenAI
from llama_index.prompts import PromptTemplate
from llama_index.response_synthesizers import get_response_synthesizer

# `index` and `message` come from your existing setup
REFINE_TEMPLATE = "My custom refine prompt..."
SYSTEM_PROMPT = "My custom system prompt..."
refine_prompt = PromptTemplate(REFINE_TEMPLATE)

# temperature is set on the LLM itself, not on the ServiceContext
llm = OpenAI(temperature=1, model="gpt-3.5-turbo", max_tokens=2048)
service_context = ServiceContext.from_defaults(system_prompt=SYSTEM_PROMPT, llm=llm)
response_synthesizer = get_response_synthesizer(service_context=service_context, refine_template=refine_prompt)
query_engine = index.as_query_engine(similarity_top_k=1, response_synthesizer=response_synthesizer)
response = query_engine.query(message)
Yes. You got that.
Very unfortunate that the docs don't cover a basic setup like this😕