Anyone mind sharing code for a successful use of embeddings with AzureOpenAI?
21 comments
oh wait, that skips the embeddings
Setting the global service context really simplifies things
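The idea is simple enough to sketch in plain Python. This is an illustrative sketch of the global-default pattern, not llama_index's actual internals: you set a context once, and any component that is not handed an explicit context falls back to the global one.

```python
# Illustrative sketch (NOT llama_index internals) of the global-default
# pattern behind set_global_service_context.
_global_service_context = None

def set_global_service_context(ctx):
    """Register a context as the process-wide default."""
    global _global_service_context
    _global_service_context = ctx

def resolve_service_context(explicit=None):
    """An explicit context wins; otherwise fall back to the global default."""
    return explicit if explicit is not None else _global_service_context

set_global_service_context({"embed_model": "text-embedding-ada-002"})
assert resolve_service_context()["embed_model"] == "text-embedding-ada-002"
assert resolve_service_context({"embed_model": "other"})["embed_model"] == "other"
```

This is why, after `set_global_service_context(...)`, you no longer have to pass `service_context=` to every index you build.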
Yes, see my code here:
Plain Text
embedding_llm = LangchainEmbedding(
    OpenAIEmbeddings(
        model="text-embedding-ada",
        deployment=DEPLOYMENT_NAME,
        openai_api_key=openai.api_key,
        openai_api_base=openai.api_base,
        openai_api_type=openai.api_type,
        openai_api_version=openai.api_version,
    ),
    embed_batch_size=1,
)

service_context = ServiceContext.from_defaults(
    llm=AzureOpenAI(engine=DEPLOYMENT_NAME, temperature=0.0, model="gpt-3.5-turbo"),
    embed_model=embedding_llm,
)

index = GPTVectorStoreIndex(nodes=nodes, service_context=service_context)
but I still get: openai.error.InvalidRequestError: The embeddings operation does not work with the specified model, gpt-35-turbo
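That InvalidRequestError usually means the embeddings request is being routed to the gpt-35-turbo deployment, i.e. the same `DEPLOYMENT_NAME` is being reused for both the chat model and the embedding model. A minimal sketch of the distinction (the deployment names below are made up for illustration; use the ones from your Azure portal):

```python
# Hypothetical deployment names -- replace with your own Azure deployments.
CHAT_DEPLOYMENT = "my-gpt-35-turbo"   # backs the AzureOpenAI chat LLM
EMBED_DEPLOYMENT = "my-ada-002"       # backs OpenAIEmbeddings

def embedding_kwargs(deployment: str) -> dict:
    """Kwargs for the embedding client; `deployment` must name an
    *embedding* deployment, never the chat one."""
    return {"model": "text-embedding-ada-002", "deployment": deployment}

kwargs = embedding_kwargs(EMBED_DEPLOYMENT)
# Passing CHAT_DEPLOYMENT here instead is exactly what triggers the
# "embeddings operation does not work with ... gpt-35-turbo" error.
assert kwargs["deployment"] != CHAT_DEPLOYMENT
```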
here is mine:
Plain Text
embedding_llm = LangchainEmbedding(
    OpenAIEmbeddings(
        model="text-embedding-ada-002",
        deployment="learning",
        openai_api_key=openai.api_key,
        openai_api_base=openai.api_base,
        openai_api_type=openai.api_type,
        openai_api_version=openai.api_version,
    ),
    embed_batch_size=1,
)
.........
llm = AzureChatOpenAI(
    deployment_name=deployment,
    temperature=0.1,
    max_tokens=num_output,
    openai_api_version=openai.api_version,
    model_kwargs={
        "api_key": openai.api_key,
        "api_base": openai.api_base,
        "api_type": openai.api_type,
        "api_version": openai.api_version,
    },
)
llm_predictor = LLMPredictor(llm=llm)

# Set up the service context: which llm and embed model are used, the maximum
# context size in tokens it can take, and the output size

service_context = ServiceContext.from_defaults(
    llm_predictor=llm_predictor,
    embed_model=embedding_llm,
    context_window=context_window,
    num_output=num_output,
)
set_global_service_context(service_context)
and does it work for you?
@Vaylonn I think my issue is with the deployment name, deployment="learning", in your example. Do you need to create that resource upfront in Azure?
it's the deployment name of the embedding model in your Azure OpenAI studio
top man. Thank you!
@Vaylonn did you have to use 2 different Api keys for each model?
And if so any issues or recommendation in declaring them?
No, only one
Plain Text
# OpenAI API configuration
# Remove the os.environ lines if deploying on Azure, and fill in the values
openai.api_type = "azure"
openai.api_version = "2023-07-01-preview"
openai.api_base = os.environ["OPENAI_API_BASE"] = "https://xxxxxx.openai.azure.com/"
openai.api_key = os.environ["OPENAI_API_KEY"] = "xxxxxx"
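Rather than hard-coding the key and endpoint, the same configuration can be built from environment variables. This is a hedged sketch: the variable names mirror the snippet above, and the defaults here are placeholders, not real endpoints.

```python
import os

# Placeholders only -- in a real deployment these come from the environment.
os.environ.setdefault("OPENAI_API_BASE", "https://xxxxxx.openai.azure.com/")
os.environ.setdefault("OPENAI_API_KEY", "placeholder-key")

# Assemble the Azure OpenAI settings from the environment.
azure_config = {
    "api_type": "azure",
    "api_version": "2023-07-01-preview",
    "api_base": os.environ["OPENAI_API_BASE"],
    "api_key": os.environ["OPENAI_API_KEY"],
}

assert azure_config["api_base"].startswith("https://")
```

Each field of `azure_config` would then be assigned onto the `openai` module exactly as in the snippet above.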