Please help to configure for use with OpenAI @ Azure

Please help to configure for use with OpenAI @ Azure: the example below is quite implicit, and it needs both an embedding model and a GPT model from OpenAI.
https://github.com/run-llama/LlamaIndexTS/blob/main/examples/vectorIndex.ts
But if I use env variables for the deployment name, I enforce just one deployment for both.
LlamaIndex supports AzureOpenAI and AzureOpenAIEmbedding. Here's the documentation:
https://docs.llamaindex.ai/en/latest/examples/customization/llms/AzureOpenAI.html#
thx Rohan: correct, I can run this Python example. But I fail to transfer it to the context of the TypeScript port.
oh sorry, I didn't notice you're using LlamaIndex TS, my bad

Set the following environment variables

Plain Text
AZURE_OPENAI_KEY=
AZURE_OPENAI_ENDPOINT=
AZURE_OPENAI_API_VERSION=
AZURE_OPENAI_DEPLOYMENT=
OPENAI_API_TYPE="azure"
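
A quick way to confirm these variables are actually visible to the Node process before constructing any models (a minimal sanity-check sketch; the loop itself is not part of llamaindex):

Plain Text
// Fail fast if any of the Azure variables above is unset.
for (const name of [
    "AZURE_OPENAI_KEY",
    "AZURE_OPENAI_ENDPOINT",
    "AZURE_OPENAI_API_VERSION",
    "AZURE_OPENAI_DEPLOYMENT",
]) {
    if (!process.env[name]) {
        throw new Error(`Missing environment variable: ${name}`);
    }
}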


So now when you instantiate OpenAI(), it will know that it should use Azure

Plain Text
import { OpenAI, serviceContextFromDefaults } from "llamaindex";

const openaiLLM = new OpenAI();
const serviceContext = serviceContextFromDefaults({ llm: openaiLLM });
Disclaimer: I haven't used Azure in the TS port myself, but according to the source code, this should work 🀞
at first glance this should work.
But look: the service context goes into the index object, so it also defines the behaviour of the completion query.
How would the code below know that it needs my model deployment #1 for embedding calls and model deployment #2 for completions? Both are specific, and I need to communicate them somehow.
Using the env variable is perfect for the endpoint/key, but for the model deployment name it overrides one of the two steps, leading to an error (see the sketch below).
In Python, they seem to wrap the embedding service context reference inside the second service context reference.
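
To make the conflict concrete, a minimal sketch (assuming, as described above, that both constructors fall back to the same AZURE_OPENAI_DEPLOYMENT when no explicit deployment name is passed):

Plain Text
import { OpenAI, OpenAIEmbedding } from "llamaindex";

// Both constructors read the deployment name from the environment,
// so they resolve to the same Azure deployment:
const llm = new OpenAI();             // completions go to AZURE_OPENAI_DEPLOYMENT
const embeds = new OpenAIEmbedding(); // embeddings also go to AZURE_OPENAI_DEPLOYMENT

// If that deployment hosts a chat model, the embedding calls fail,
// and vice versa. Hence the need for two distinct deployment names.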
I see, so you mean something like this? A different deployment name for the LLM and the embedding model?
Plain Text
import { OpenAI, OpenAIEmbedding, serviceContextFromDefaults } from "llamaindex";

// Completion model on its own Azure deployment
const openaiLLM = new OpenAI({
    temperature: 0,
    azure: {
        deploymentName: "llmDeployment"
    }
});

// Embedding model on a separate Azure deployment
const openaiEmbeds = new OpenAIEmbedding({
    azure: {
        deploymentName: "embedDeployment"
    }
});

const serviceContext = serviceContextFromDefaults({
    embedModel: openaiEmbeds,
    llm: openaiLLM,
});
yes, I am trying to configure this currently, just having no idea about either llamaindex or nodejs πŸ˜„
does this solve the issue then?
like getting rid of the error you're getting without the deployment name
yess, it works! thanks for the guidance! πŸ™‚
πŸ₯³ πŸŽ‰
sharing this in the corresponding issue. Maybe it will find its way to the examples folder.
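
For completeness, here is a sketch of how the two-deployment service context above could plug into the vectorIndex.ts flow from the original question (the data file path is hypothetical, and the API shapes follow the LlamaIndexTS examples current at the time of this thread):

Plain Text
import fs from "node:fs/promises";
import {
    Document,
    OpenAI,
    OpenAIEmbedding,
    VectorStoreIndex,
    serviceContextFromDefaults,
} from "llamaindex";

const essay = await fs.readFile("data/essay.txt", "utf-8"); // hypothetical path
const document = new Document({ text: essay });

const serviceContext = serviceContextFromDefaults({
    llm: new OpenAI({ temperature: 0, azure: { deploymentName: "llmDeployment" } }),
    embedModel: new OpenAIEmbedding({ azure: { deploymentName: "embedDeployment" } }),
});

// Embedding calls hit "embedDeployment"; the completion call hits "llmDeployment".
const index = await VectorStoreIndex.fromDocuments([document], { serviceContext });
const queryEngine = index.asQueryEngine();
const response = await queryEngine.query("What did the author do growing up?");
console.log(response.toString());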