llm_predictor = ChatGPTLLMPredictor(
    llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo", streaming=False)
)
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)
query_engine = index.as_query_engine(
    text_qa_template=CHAT_QA_PROMPT,
    refine_template=CHAT_REFINE_PROMPT,
    similarity_top_k=3,
    streaming=False,
    service_context=service_context,
)
query_engine = index.as_query_engine(text_qa_template=CHAT_QA_PROMPT, refine_template=CHAT_REFINE_PROMPT, similarity_top_k=3)
query_engine = general_index.as_query_engine(
    text_qa_template=CHAT_TEXT_QA_PROMPT,
    refine_template=CHAT_REFINE_PROMPT,
    similarity_top_k=10,
    streaming=False,
    service_context=service_context,
    node_postprocessors=[rerank],
)
My question is about the finetuning_events.jsonl results: are these results okay? Should entries like {"messages": [{"role": "user", "content": "You are an expert Q&A system that strictly operates in two modes when refining existing answers:\n1. **Rewrite** an original answer using the new context ....
be present in the finetuning_events.jsonl? Is this correct or wrong?

ValueError: Invalid message type: <class 'langchain.schema.messages.SystemMessage'>
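Before worrying about whether a given record belongs in the file, it helps to confirm that every line at least parses and follows the chat-messages shape that OpenAI-style fine-tuning expects. This is a minimal stdlib-only sketch; the exact schema checks (non-empty "messages" list, each entry having "role" and "content") are an assumption based on the records shown above, not a LlamaIndex API:

```python
import json

def validate_finetuning_events(path):
    """Check that every line of a finetuning_events.jsonl-style file
    parses as JSON and carries a non-empty "messages" list whose
    entries each have "role" and "content" keys.

    Returns a list of (line_number, problem) tuples; an empty list
    means the file looks well-formed."""
    problems = []
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue  # skip blank lines
            try:
                record = json.loads(line)
            except json.JSONDecodeError as exc:
                problems.append((lineno, f"not valid JSON: {exc}"))
                continue
            messages = record.get("messages")
            if not isinstance(messages, list) or not messages:
                problems.append((lineno, 'missing or empty "messages" list'))
                continue
            for msg in messages:
                if not isinstance(msg, dict) or "role" not in msg or "content" not in msg:
                    problems.append((lineno, "message lacks role/content"))
                    break
    return problems
```

Running this over the generated file is a quick way to separate "the format is broken" from "the format is fine but the content is unexpected".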
query_engine = index.as_query_engine(text_qa_template=CHAT_QA_PROMPT, refine_template=CHAT_REFINE_PROMPT)
I assume this is outdated code. What would be the newer way of doing this? (see screenshot) Thank you! @Logan M

llm_predictor = ChatGPTLLMPredictor(
    llm=ChatOpenAI(
        temperature=0,
        model_name="gpt-3.5-turbo-0613",
        streaming=False,
        max_tokens=1000,
    )
)
service_context = ServiceContext.from_defaults(
    llm_predictor=llm_predictor,
    prompt_helper=prompt_helper,
    callback_manager=callback_manager,
)
llm = OpenAI(temperature=0, model="gpt-3.5-turbo")
service_context = ServiceContext.from_defaults(llm=llm)
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)
query_engine = index.as_query_engine()
response = query_engine.query("hi")
print(response)
AttributeError: 'ServiceContext' object has no attribute 'llm'
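A likely cause of that AttributeError is an installed llama-index that predates the `llm=` keyword on `ServiceContext.from_defaults` (added somewhere in the 0.6.x line, to the best of my recollection; treat that threshold as an assumption and check the changelog). A small stdlib check for the installed version:

```python
from importlib.metadata import PackageNotFoundError, version

def parse_version(v):
    """Turn a version string like '0.6.12' into (0, 6, 12) for tuple
    comparison; anything from the first non-digit onward in each piece
    (e.g. the 'rc1' in '1.0.0rc1') is dropped."""
    parts = []
    for piece in v.split("."):
        digits = ""
        for ch in piece:
            if ch.isdigit():
                digits += ch
            else:
                break
        if digits:
            parts.append(int(digits))
    return tuple(parts)

def installed_at_least(package, minimum):
    """Return True if `package` is installed at version >= `minimum`,
    False if older, None if not installed at all."""
    try:
        v = version(package)
    except PackageNotFoundError:
        return None
    return parse_version(v) >= parse_version(minimum)

# Example (the "0.6.0" floor is an assumption -- verify against the
# llama-index release notes):
#   installed_at_least("llama-index", "0.6.0")
```

If the check comes back False, upgrading the package should make the `llm=` form above work as written.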
What am I doing wrong here, please?
I tried following this guide but was unsuccessful: https://gpt-index.readthedocs.io/en/latest/how_to/customization/llms_migration_guide.html

query_engine = index.as_query_engine(vector_store_query_mode="mmr")
I can't tell whether to use it or not. Sometimes I get a better answer with vector_store_query_mode="mmr" and sometimes I get a better answer without it. Could I run two query engines (one with mmr and one without) and have the LLM decide which answer is better and output that?
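That pattern is straightforward to wire up yourself: query both engines, then ask the LLM to act as a judge over the two candidate answers. Here is a framework-agnostic sketch where the engines and the judge are plain callables; all names are hypothetical stand-ins, not LlamaIndex API:

```python
def best_of_two(question, engine_a, engine_b, judge):
    """Query both engines and let `judge` pick the better answer.

    engine_a / engine_b: callables mapping a question to an answer string.
    judge: callable taking (question, answer_a, answer_b) and returning
           "A" or "B" -- in practice this would be a single LLM call with
           a comparison prompt (hypothetical, not a built-in API).
    """
    answer_a = engine_a(question)
    answer_b = engine_b(question)
    verdict = judge(question, answer_a, answer_b)
    return answer_a if verdict == "A" else answer_b

# With LlamaIndex, the two engines could wrap your query engines, e.g.
#   engine_a = lambda q: str(index.as_query_engine(
#       vector_store_query_mode="mmr").query(q))
#   engine_b = lambda q: str(index.as_query_engine().query(q))
# and the judge could be one completion call with a prompt along the
# lines of "Given the question and two candidate answers, reply with
# exactly A or B for the better answer."
```

Note this doubles your retrieval + generation cost per query, plus one judge call, so it is worth measuring whether the quality gain justifies it.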