When I use the legacy ingestion pipeline it works fine, so this seems to be an issue with the latest version. Here is the code where I use the legacy ingestion pipeline:

from llama_index.legacy.ingestion import IngestionPipeline
# from llama_index.core.ingestion import IngestionPipeline

transformations = [
    SentenceSplitter(),
    TitleExtractor(nodes=5),
    QuestionsAnsweredExtractor(questions=3),
]
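For comparison, here is a minimal sketch of the same transformations wired into the newer core IngestionPipeline (assuming llama-index >= 0.10 and an LLM configured globally, since the extractors call an LLM; the sample document text is a placeholder):

from llama_index.core import Document
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.extractors import TitleExtractor, QuestionsAnsweredExtractor

pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(),
        TitleExtractor(nodes=5),
        QuestionsAnsweredExtractor(questions=3),
    ]
)

# run() applies the transformations in order and returns the resulting nodes
nodes = pipeline.run(documents=[Document(text="Example document text.")])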
When I use AzureOpenAI and pass it as the llm to the query engine, I get the following error, please help: "ValueError: Cannot use llm_chat_callback on an instance without a callback_manager attribute."
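One way to rule this error out is to attach a CallbackManager explicitly and keep every import on the core path, since mixing llama_index.legacy and llama_index.core imports is a common cause. A minimal sketch (assuming llama-index >= 0.10 with the llama-index-llms-azure-openai package installed; the deployment name, endpoint, and API version are placeholders):

from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager
from llama_index.llms.azure_openai import AzureOpenAI

callback_manager = CallbackManager([])

llm = AzureOpenAI(
    engine="my-gpt-35-turbo-deployment",  # placeholder Azure deployment name
    model="gpt-35-turbo",
    api_key="<your-azure-openai-key>",
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder endpoint
    api_version="2023-07-01-preview",  # placeholder API version
    callback_manager=callback_manager,
)

# Register the same objects globally so query engines pick them up
Settings.llm = llm
Settings.callback_manager = callback_manager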
Hi guys, I am using Azure OpenAI in the code below and getting the error message "InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.chat_completion.ChatCompletion'>". Here is the code:

llm = AzureOpenAI(
    model="gpt-35-turbo",
    model_name="text-davinci-003",
    deployment_id=os.getenv("OPENAI_API_DEPLOYMENT_NAME"),
    api_key=os.getenv("OPENAI_API_KEY"),
    api_base=os.getenv("OPENAI_API_BASED"),
    api_type=os.getenv("OPENAI_API_TYPE"),
    api_version=os.getenv("OPENAI_API_VERSION"),
)
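For reference, a minimal sketch of a constructor where the deployment name reaches the Azure client as engine (this assumes a current llama-index / openai >= 1.0 stack, reuses the env variable names from the snippet above where they exist, and treats OPENAI_API_BASE as a hypothetical variable holding the https://<resource>.openai.azure.com endpoint):

import os
from llama_index.llms.azure_openai import AzureOpenAI

llm = AzureOpenAI(
    engine=os.getenv("OPENAI_API_DEPLOYMENT_NAME"),  # Azure deployment name
    model="gpt-35-turbo",                            # model behind that deployment
    api_key=os.getenv("OPENAI_API_KEY"),
    azure_endpoint=os.getenv("OPENAI_API_BASE"),     # assumed endpoint variable
    api_version=os.getenv("OPENAI_API_VERSION"),
)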
ls_spilt_document = []
for document_page in doc_patient_visit:
    ls_spilt_document = markdown_splitter.split_text(document_page.page_content)
    for split_index in range(len(ls_spilt_document)):
        if "Patient Demographics" in list(ls_spilt_document[split_index].metadata.keys()):
            document = Document(text=ls_spilt_document[split_index].page_content)
            nodes = node_parser.get_nodes_from_documents([document], show_progress=True)
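If the intent is to keep the nodes from every matching split rather than only the last one, a minimal sketch (assuming the same markdown_splitter, doc_patient_visit, and node_parser objects as above, and llama-index >= 0.10 for the Document import) could accumulate them:

from llama_index.core import Document

all_nodes = []
for document_page in doc_patient_visit:
    # Split each page by markdown headers, then keep only the sections whose
    # metadata contains a "Patient Demographics" key
    splits = markdown_splitter.split_text(document_page.page_content)
    for split in splits:
        if "Patient Demographics" in split.metadata:
            document = Document(text=split.page_content)
            all_nodes.extend(
                node_parser.get_nodes_from_documents([document], show_progress=True)
            )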