Reading the source code of the synthesizer

I'm reading the source code of the synthesizer, so please correct me if I'm wrong.
It seems like query_engine calls predict when synthesizing.
Is it that query_engine calls response = self._llm.complete(formatted_prompt),
whereas chat_engine calls chat_response = self._llm.chat(messages) instead?
```python
def predict(
    self,
    prompt: BasePromptTemplate,
    output_cls: Optional[BaseModel] = None,
    **prompt_args: Any,
) -> str:
    """Predict."""
    self._log_template_data(prompt, **prompt_args)

    if output_cls is not None:
        output = self._run_program(output_cls, prompt, **prompt_args)
    elif self._llm.metadata.is_chat_model:
        messages = prompt.format_messages(llm=self._llm, **prompt_args)
        messages = self._extend_messages(messages)
        chat_response = self._llm.chat(messages)
        output = chat_response.message.content or ""
    else:
        formatted_prompt = prompt.format(llm=self._llm, **prompt_args)
        formatted_prompt = self._extend_prompt(formatted_prompt)
        response = self._llm.complete(formatted_prompt)
        output = response.text
```
yea, llm.predict() will use llm.complete() or llm.chat() depending on whether the LLM is a chat model or not
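A minimal sketch of the two entry points that reply refers to, assuming the v0.10+ llama_index.core layout (import paths differ across versions):

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.openai import OpenAI  # assumes llama-index-llms-openai is installed

llm = OpenAI(model="gpt-4")

# Completion-style call: plain string in, CompletionResponse out
completion = llm.complete("What is an LLM?")
print(completion.text)

# Chat-style call: list of ChatMessage objects in, ChatResponse out
chat = llm.chat([ChatMessage(role="user", content="What is an LLM?")])
print(chat.message.content)
```

So predict() is just picking between these two calls based on the model's metadata, as the quoted source shows.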
how do you determine whether something is a chat model or not?
I need to confirm this on Azure OpenAI GPT-4
looking at the debug logs
```
'/deployments/my-deployed-gpt-4-model/chat/completions'
```
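For reference, request URLs like that show up once debug logging is on; this is just the standard Python logging setup, nothing llama-index specific:

```python
import logging
import sys

# Send DEBUG-level logs, including the HTTP client's request URLs, to stdout
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
```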

I'm guessing that since the path has chat in it, it'll use llm.chat()?
my understanding is that the llm.complete / completions API is more or less deprecated, and OpenAI / Azure OpenAI have moved on to chat
llm.metadata.is_chat_model
OpenAI is using llm.chat(), yes
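If anyone wants to confirm this on their own deployment, here is a quick check sketched under the assumption that llama-index-llms-azure-openai is installed; the endpoint, key, and API version below are placeholders:

```python
from llama_index.llms.azure_openai import AzureOpenAI

llm = AzureOpenAI(
    engine="my-deployed-gpt-4-model",  # the deployment name from the log line above
    model="gpt-4",
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
    api_key="<your-key>",  # placeholder
    api_version="2024-02-01",  # placeholder; use the version your resource supports
)

# True means predict() takes the llm.chat() branch shown in the snippet above
print(llm.metadata.is_chat_model)
```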