The following is my code. Two questions:

1. How can I customize the call to the model in new_index.query?
2. How can I use the gpt-3.5-turbo model to ask context-aware (follow-up) questions?
from langchain.chat_models import ChatOpenAI
from llama_index import (
    GPTSimpleVectorIndex,
    LLMPredictor,
    QuestionAnswerPrompt,
    SimpleDirectoryReader,
)

num_output = 256  # example value: max tokens the model may generate

llm = ChatOpenAI(temperature=0, model_name='gpt-3.5-turbo', max_tokens=num_output)
llm_predictor = LLMPredictor(llm=llm)
documents = SimpleDirectoryReader('data').load_data()
index = GPTSimpleVectorIndex(documents, llm_predictor=llm_predictor, chunk_size_limit=500)
index.save_to_disk('index.json')
# Note: a text_qa_template has no "original answer" to refine, so the
# refine-prompt wording was removed; only {context_str} and {query_str} apply.
QA_PROMPT_TMPL = (
    "Context information is below.\n"
    "------------\n"
    "{context_str}\n"
    "------------\n"
    "Given this context information, please answer the question: {query_str}\n"
)
QA_PROMPT = QuestionAnswerPrompt(QA_PROMPT_TMPL)
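For what it's worth, the string handed to QuestionAnswerPrompt is an ordinary Python format string: at query time llama_index fills {context_str} with the retrieved chunks and {query_str} with the user question, so both placeholders must be present. A minimal sketch of that substitution using plain str.format (no llama_index needed; the context and question values are made up for illustration):

```python
# The QA template is a plain format string with two required placeholders.
template = (
    "Context information is below.\n"
    "------------\n"
    "{context_str}\n"
    "------------\n"
    "Given this context, answer the question: {query_str}\n"
)

# Both placeholders must appear, or the query-time substitution breaks.
assert "{context_str}" in template and "{query_str}" in template

# Roughly what llama_index does internally before calling the model:
filled = template.format(
    context_str="Paris is the capital of France.",
    query_str="What is the capital of France?",
)
print(filled)
```

This is also a quick way to preview exactly what prompt the model will receive for a given chunk and question.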
new_index = GPTSimpleVectorIndex.load_from_disk('index.json')
response = new_index.query("who are you?", text_qa_template=QA_PROMPT)
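Regarding question 2, one simple approach (an assumption on my part, not a built-in llama_index API) is to keep a running chat history in your own code and fold it into the query string, so the model can resolve follow-up references like "he" or "it". A minimal sketch, with a hypothetical helper build_contextual_query and made-up history entries:

```python
# Sketch: make follow-up questions "context-aware" by prepending prior turns
# to the query string passed to new_index.query. The history format here is
# an illustrative choice, not part of llama_index.

def build_contextual_query(history, new_question):
    """Fold prior (question, answer) turns into the new query string."""
    turns = "\n".join(f"Q: {q}\nA: {a}" for q, a in history)
    if not turns:
        return new_question
    return (
        "Conversation so far:\n"
        f"{turns}\n"
        f"New question (answer with the conversation in mind): {new_question}"
    )

history = [("Who wrote Hamlet?", "William Shakespeare.")]
query_str = build_contextual_query(history, "When was he born?")

# Then query as before and append the new turn to the history, e.g.:
# response = new_index.query(query_str, text_qa_template=QA_PROMPT)
# history.append(("When was he born?", str(response)))
```

The trade-off is that the history consumes prompt tokens, so you would likely want to truncate or summarize older turns as the conversation grows.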