Find answers from the community

Guru
Hi all, several imports are failing, such as "from llama_index.core import download_loader".
4 comments
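For code that has to survive the packaging split, a try/except import shim is one way to probe whether the old location still works. A minimal sketch, assuming nothing about which llama_index version (if any) is installed; the `None` fallback is an illustrative placeholder, not the library's recommendation:

```python
# Hedged sketch: probe the old import location and fall back gracefully.
# In newer releases the loaders are installed as separate reader packages
# instead of being fetched through download_loader.
try:
    from llama_index.core import download_loader  # old location, may be removed
except ImportError:
    download_loader = None  # fall back: install the specific reader package instead
```

If the import fails, the usual remedy is to install and import the specific reader package directly rather than downloading loaders at runtime.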
When I use the legacy ingestion pipeline it works fine, so this seems to be an issue with the latest version. Here is the code using the legacy ingestion pipeline:
"
# imports added for completeness
from llama_index.legacy.node_parser import SentenceSplitter
from llama_index.legacy.extractors import TitleExtractor, QuestionsAnsweredExtractor
from llama_index.legacy.ingestion import IngestionPipeline
#from llama_index.core.ingestion import IngestionPipeline

transformations = [
    SentenceSplitter(),
    TitleExtractor(nodes=5),
    QuestionsAnsweredExtractor(questions=3),
]

pipeline = IngestionPipeline()
pipeline.transformations = transformations

nodes = pipeline.run(documents=split_docs)
"
4 comments
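Independent of which import path works, the pipeline pattern itself is simple: each transformation consumes a list of nodes and produces a new one. A minimal pure-Python sketch of that pattern (illustrative names only, not the llama_index API):

```python
# Sketch of the ingestion-pipeline pattern: run() threads the documents
# through each transformation in order.
class IngestionPipelineSketch:
    def __init__(self, transformations):
        self.transformations = transformations

    def run(self, documents):
        nodes = list(documents)
        for transform in self.transformations:
            nodes = transform(nodes)  # each stage returns the next node list
        return nodes


def split_sentences(docs):
    # toy stand-in for SentenceSplitter
    return [s for d in docs for s in d.split(". ")]


pipeline = IngestionPipelineSketch([split_sentences])
nodes = pipeline.run(["First sentence. Second sentence."])
```

This is why order matters in the `transformations` list: splitters must run before extractors that operate on the resulting nodes.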
When I use AzureOpenAI and pass it as the llm to the query_engine, I get the following error, please help: "ValueError: Cannot use llm_chat_callback on an instance without a callback_manager attribute."
1 comment
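The error message points at a decorator guard: the chat method is wrapped by something that insists the instance carries a callback_manager. A pure-Python sketch of that pattern, assuming nothing about the actual llama_index source:

```python
# Hedged sketch: a wrapper like llm_chat_callback refuses to run unless the
# wrapped instance exposes a callback_manager attribute.
def llm_chat_callback(method):
    def wrapper(self, *args, **kwargs):
        if not hasattr(self, "callback_manager"):
            raise ValueError(
                "Cannot use llm_chat_callback on an instance "
                "without a callback_manager attribute."
            )
        return method(self, *args, **kwargs)
    return wrapper


class CustomLLM:  # hypothetical LLM class missing the attribute
    @llm_chat_callback
    def chat(self, message):
        return message.upper()
```

So the fix the error is asking for is to give the LLM instance a callback manager (in llama_index, by constructing it through the library's own classes or assigning its CallbackManager) rather than passing a bare object.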
Guru

AzureOpenAI

Hi guys, I am using Azure OpenAI in the code below and getting the error message "InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.chat_completion.ChatCompletion'>".
Below is the code:
"llm = AzureOpenAI(
    model="gpt-35-turbo",
    model_name="text-davinci-003",
    deployment_id=os.getenv("OPENAI_API_DEPLOYMENT_NAME"),
    api_key=os.getenv("OPENAI_API_KEY"),
    api_base=os.getenv("OPENAI_API_BASED"),
    api_type=os.getenv("OPENAI_API_TYPE"),
    api_version=os.getenv("OPENAI_API_VERSION"),
)

service_context = ServiceContext.from_defaults(
    llm=llm,
)
set_global_service_context(service_context)

node_parser = SimpleNodeParser.from_defaults(
    text_splitter=text_splitter,
    metadata_extractor=metadata_extractor,
)


ls_split_document = []
for document_page in doc_patient_visit:
    ls_split_document = markdown_splitter.split_text(document_page.page_content)
    for split_index in range(len(ls_split_document)):
        if "Patient Demographics" in list(ls_split_document[split_index].metadata.keys()):
            document = Document(text=ls_split_document[split_index].page_content)
            nodes = node_parser.get_nodes_from_documents([document], show_progress=True)"
21 comments
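That InvalidRequestError usually means the deployment value resolved to None at call time, e.g. when an environment variable is unset or misspelled (note the snippet above reads "OPENAI_API_BASED", which may be a typo for "OPENAI_API_BASE"). A small guard that surfaces missing variables before any API call; the function name and variable names here follow the snippet above but are otherwise illustrative:

```python
import os

# Hedged sketch: resolve the Azure OpenAI settings from the environment and
# fail loudly if any are unset, instead of passing None into the client.
def resolve_azure_config():
    required = {
        "deployment_id": "OPENAI_API_DEPLOYMENT_NAME",
        "api_key": "OPENAI_API_KEY",
        "api_base": "OPENAI_API_BASE",
        "api_version": "OPENAI_API_VERSION",
    }
    config = {arg: os.getenv(var) for arg, var in required.items()}
    missing = [var for arg, var in required.items() if config[arg] is None]
    if missing:
        raise RuntimeError(f"Unset environment variables: {missing}")
    return config
```

Running this before constructing AzureOpenAI makes a misnamed or unset variable obvious, instead of letting it surface later as a missing 'engine'/'deployment_id' error.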