from llama_index.finetuning import (
    generate_qa_embedding_pairs,
    EmbeddingQAFinetuneDataset,
)

train_dataset = generate_qa_embedding_pairs(nodes, llm=llm_model)

I was having issues using an AzureChatOpenAI model with the generate_qa_embedding_pairs function. Any way I can get around this?
Attachment
Screenshot_2024-02-28_at_4.47.33_PM.png
8 comments
How have you imported AzureChatOpenAI? I think this is not coming from llama-index.
I'm importing it like this:
from langchain.chat_models import AzureChatOpenAI
Ah, that's why! Import AzureOpenAI from llama-index instead. Internally the function makes the API call via llm.complete, which is not present in the langchain implementation.
First do this: pip install llama-index-llms-azure-openai

and then import it like this:
from llama_index.llms.azure import AzureOpenAI
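Why the langchain model fails here can be sketched without either library installed: generate_qa_embedding_pairs drives the LLM through llm.complete(...), so whatever you pass as llm must expose that method. The classes below are hypothetical stand-ins for illustration only, not the real llama-index or langchain classes:

```python
class LlamaIndexStyleLLM:
    """Stand-in for the interface llama-index expects: a .complete() method."""
    def complete(self, prompt: str) -> str:
        return f"completion for: {prompt}"


class LangchainStyleChatModel:
    """Stand-in for a langchain chat model: chat-style invoke(), no .complete()."""
    def invoke(self, messages):
        return "chat response"


def generate_pairs(llm):
    # Roughly what generate_qa_embedding_pairs does internally for each node:
    return llm.complete("Generate questions for this node.")


print(generate_pairs(LlamaIndexStyleLLM()))  # works

try:
    generate_pairs(LangchainStyleChatModel())
except AttributeError as err:
    print(err)  # missing 'complete' -> the kind of error in the screenshot
```

Passing the llama-index AzureOpenAI class satisfies that interface, which is why the import swap above fixes the error.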
I'm getting this error
Attachment
image.png
Are you on version 0.9? Maybe it's a version mismatch.
Yeah, this was solved in newer versions.