Hi,
I'm facing an issue when using ChatOpenAI with gpt-35-turbo on Azure.
It was working fine until yesterday, but since version 0.6.4 it seems to be broken.
# Azure OpenAI settings (base URL and key redacted)
openai.api_type = "azure"
openai.api_version = "2022-12-01"
openai.api_base = "...."
openai.api_key = "...."

# Wrap the Azure deployment in langchain's ChatOpenAI
deployment_name = "gpt-35-turbo"
llm = ChatOpenAI(model_name=deployment_name)
llm_predictor = LLMPredictor(llm=llm)
embedding_model = LangchainEmbedding(HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl", model_kwargs={'device': 'cuda:1'}))
# Define prompt helper
prompt_helper = PromptHelper(max_input_size=max_input_size, num_output=num_output, max_chunk_overlap=CHUNK_OVERLAP_LLM, chunk_size_limit=max_input_size)
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper, embed_model=embedding_model)
I'm using this with the latest GPTDocumentSummaryIndex:
response_synthesizer = ResponseSynthesizer.from_args(response_mode="tree_summarize", use_async=False)
doc_summary_index = GPTDocumentSummaryIndex.from_documents(documents, service_context=service_context, response_synthesizer=response_synthesizer)
And I am getting back the error:

Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.completion.Completion'>
Similar to what is seen here: https://github.com/jerryjliu/llama_index/issues/2129

Is anybody else facing the same issue?
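For context, my understanding (an assumption on my part, based on the error text) is that the Azure OpenAI endpoint routes requests by deployment, so every completion call has to carry an `engine` (or `deployment_id`) naming the deployment, and the plain `ChatOpenAI` path is apparently no longer passing it through. A minimal sketch of the request kwargs I believe need to be built (`build_azure_completion_kwargs` is a hypothetical helper I wrote for illustration, not library code):

```python
def build_azure_completion_kwargs(deployment_name, messages, **extra):
    """Hypothetical helper: assemble the kwargs an Azure OpenAI chat
    completion call needs. Azure routes by deployment, so 'engine'
    (or 'deployment_id') must be present on every request."""
    kwargs = {"engine": deployment_name, "messages": messages}
    kwargs.update(extra)  # e.g. temperature, max_tokens
    return kwargs

kwargs = build_azure_completion_kwargs(
    "gpt-35-turbo",
    [{"role": "user", "content": "hello"}],
    temperature=0,
)
print(kwargs["engine"])  # gpt-35-turbo
```

So somewhere between the wrapper and openai, that `engine` key seems to be getting dropped since 0.6.4.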