Updated 2 months ago

Are you setting a system prompt

Are you setting a system prompt somewhere? By default there isn't one in LlamaIndex.
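For reference, in the OpenAI chat-completion format a system prompt is just an optional first message with role "system", so "no system prompt by default" means the request starts straight at the user turn. A minimal illustration (message contents here are hypothetical):

```python
# Without a system prompt, the message list begins at the user turn.
messages_default = [
    {"role": "user", "content": "What did the author do growing up?"},
]

# With one, a "system" message is simply prepended to the same list.
messages_with_system = [
    {"role": "system", "content": "Answer only from the given context."},  # hypothetical prompt
] + messages_default

print(len([m for m in messages_with_system if m["role"] == "system"]))  # → 1
```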
Yes, I have this:

Plain Text
def init_query_engine(default_prompt_template, index, system_prompt, temp):
    # Custom templates (default_prompt_template=False) cap the refine loop at one pass
    max_iterations = MAX_ITERATION
    if default_prompt_template is False:
        max_iterations = 1

    chat_llm = ChatOpenAI(model_name="gpt-3.5-turbo-0301", temperature=temp, max_tokens=MAX_TOKENS)

    # Prepend the system prompt (if any) to every chat completion
    llm_predictor = ChatGPTLLMPredictor(llm=chat_llm,
                                        prepend_messages=get_system_prompt(system_prompt))
    aim_callback = AimCallback(repo="./")

    callback_manager = CallbackManager([aim_callback])

    service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, callback_manager=callback_manager)

    query_engine = index.as_query_engine(similarity_top_k=SIMILARITY_TOP_K,
                                         service_context=service_context,
                                         max_iterations=max_iterations,
                                         text_qa_template=get_prompt_templates(default_prompt_template),
                                         refine_template=get_refine_prompt_template(default_prompt_template))
    return llm_predictor, query_engine
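A side note on the `max_iterations` branch above: with the default templates the multi-step refine loop is kept, while a custom template caps it at one pass. That branching can be checked in isolation (the `MAX_ITERATION` value of 4 here is hypothetical, standing in for the constant used above):

```python
MAX_ITERATION = 4  # hypothetical value for the module-level constant

def resolve_max_iterations(default_prompt_template):
    # Default templates keep the iterative refine loop;
    # custom templates (default_prompt_template=False) run a single pass
    return MAX_ITERATION if default_prompt_template else 1

print(resolve_max_iterations(True), resolve_max_iterations(False))  # → 4 1
```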
The get_system_prompt function looks like this:
Plain Text
def get_system_prompt(system_prompt):
    prepend_messages = []
    if system_prompt is not None:
        # Escape curly braces, or they will be interpreted as format strings
        system_prompt = system_prompt.replace("{", "{{")
        system_prompt = system_prompt.replace("}", "}}")
        prepend_messages.append(SystemMessagePromptTemplate.from_template(system_prompt))
    return prepend_messages
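The brace escaping matters because `SystemMessagePromptTemplate.from_template` treats `{name}` as a template variable, so literal braces (e.g. JSON in the prompt) must be doubled. The same behavior can be demonstrated with plain `str.format` (the example strings below are made up):

```python
def escape_braces(text):
    # Double each brace so str.format (and f-string-style prompt templates)
    # treats it as a literal character rather than a placeholder
    return text.replace("{", "{{").replace("}", "}}")

# Without escaping, the JSON braces below would raise a KeyError in .format()
system_prompt = escape_braces('Answer as JSON like {"ok": true}.')
template = system_prompt + " Question: {question}"
print(template.format(question="What is 2 + 2?"))
# → Answer as JSON like {"ok": true}. Question: What is 2 + 2?
```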


I also tried something simpler:

service_context.llama_logger.get_logs()

That gets me the same prompt, but without the system prompt.