
Updated 2 years ago

Prompt templating

At a glance
The following is my code.

1. How can I customize the call to the model in new_index.query?
2. How can I use the gpt-3.5-turbo model to ask context-aware questions?
Plain Text
# Assumes the gpt_index / llama_index version this snippet was written against,
# where these top-level imports existed
from langchain.chat_models import ChatOpenAI
from llama_index import (
    GPTSimpleVectorIndex,
    LLMPredictor,
    QuestionAnswerPrompt,
    SimpleDirectoryReader,
)

num_output = 256  # max tokens for the model's response
llm = ChatOpenAI(temperature=0, model_name='gpt-3.5-turbo', max_tokens=num_output)
llm_predictor = LLMPredictor(llm=llm)
documents = SimpleDirectoryReader('data').load_data()
index = GPTSimpleVectorIndex(documents, llm_predictor=llm_predictor, chunk_size_limit=500)
index.save_to_disk('index.json')

QA_PROMPT_TMPL = (
    "We have the opportunity to refine the above answer "
    "(only if needed) with some more context below.\n"
    "------------\n"
    "{context_str}\n"
    "------------\n"
    "Given the new context, refine the original answer to better "
    "answer the question. "
    "If the context isn't useful, output the original answer again."
    "Given this information, please answer the question: {query_str}\n"
)
QA_PROMPT = QuestionAnswerPrompt(QA_PROMPT_TMPL)
new_index = GPTSimpleVectorIndex.load_from_disk('index.json')
response = new_index.query("who are you?", text_qa_template=QA_PROMPT)
1 comment
Hey!

Your QA prompt template mentions refining an answer, but refinement is handled by a separate refine_template, which you'll need to create and pass in yourself.

The text_qa_template is only used on the first LLM call, before the model has produced a previous answer.

For details, check out the current default refine prompt for ChatGPT:
https://github.com/jerryjliu/llama_index/blob/main/gpt_index/prompts/chat_prompts.py
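A minimal sketch of what that split could look like. The refine template text here is an assumption modeled on the library's defaults, and the RefinePrompt class and refine_template keyword are assumed from the same gpt_index version as the question's code, so verify against your installed release; the library calls are commented out because they need the saved index and an API key.

```python
import string

# text_qa_template answers from context on the first call;
# refine_template reworks an existing answer using new context.
QA_PROMPT_TMPL = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given this information, please answer the question: {query_str}\n"
)

REFINE_PROMPT_TMPL = (
    "The original question is as follows: {query_str}\n"
    "We have provided an existing answer: {existing_answer}\n"
    "We have the opportunity to refine the existing answer "
    "(only if needed) with some more context below.\n"
    "------------\n"
    "{context_msg}\n"
    "------------\n"
    "Given the new context, refine the original answer to better "
    "answer the question. "
    "If the context isn't useful, output the original answer again.\n"
)

# Sanity-check which placeholders each template expects.
def fields(tmpl):
    return {name for _, name, _, _ in string.Formatter().parse(tmpl) if name}

qa_fields = fields(QA_PROMPT_TMPL)          # context_str and query_str
refine_fields = fields(REFINE_PROMPT_TMPL)  # query_str, existing_answer, context_msg

# Wrapping both templates and passing them to query:
# QA_PROMPT = QuestionAnswerPrompt(QA_PROMPT_TMPL)
# REFINE_PROMPT = RefinePrompt(REFINE_PROMPT_TMPL)
# new_index = GPTSimpleVectorIndex.load_from_disk('index.json')
# response = new_index.query(
#     "who are you?",
#     text_qa_template=QA_PROMPT,
#     refine_template=REFINE_PROMPT,
# )
```

With that, the first retrieved chunk is answered via the QA template, and each subsequent chunk triggers a refine call that can revise the running answer.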