
Updated last year

I'm stuck πŸ˜₯

Does anyone know how I can route the default LlamaIndex requests to my Azure proxy service? I tried to do it as follows:

Plain Text
  import { OpenAI, OpenAISession, serviceContextFromDefaults } from "llamaindex";

  const serviceContext = serviceContextFromDefaults({
    llm: new OpenAI({
      session: new OpenAISession({ baseURL: 'http://azureopenaiproxy.service/handler' }),
      temperature: 0.1,
    }),
  });

And I have an account with 1000 tokens, but it seems like requests are still being routed to the default GPT API. The error is:

Plain Text
429 Rate limit reached for text-embedding-ada-002 in organization org-******** on requests per min (RPM): Limit 3, Used 3, Requested 1. Please try again in 20s. Visit https://platform.openai.com/account/rate-limits to learn more. You can increase your rate limit by adding a payment method to your account at https://platform.openai.com/account/billing.
1 comment
You can try increasing the batch size for embeddings, since you are hitting this error while the embedding process is running; fewer, larger requests keep you under the per-minute request limit.
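Also worth noting: the 429 mentions text-embedding-ada-002, which suggests only the LLM was rerouted and the embedding calls are still going to api.openai.com. A minimal sketch of routing the embedding model through the same session, assuming OpenAIEmbedding accepts the same `session` option as OpenAI (not verified against your llamaindex version):

```typescript
import {
  OpenAI,
  OpenAIEmbedding,
  OpenAISession,
  serviceContextFromDefaults,
} from "llamaindex";

// Shared session pointing at the proxy endpoint from the question.
const session = new OpenAISession({
  baseURL: "http://azureopenaiproxy.service/handler",
});

const serviceContext = serviceContextFromDefaults({
  llm: new OpenAI({ session, temperature: 0.1 }),
  // Assumption: OpenAIEmbedding takes the same `session` option, so
  // embedding requests also go through the proxy instead of the default API.
  embedModel: new OpenAIEmbedding({ session }),
});
```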

Also, your account has a very low rate limit (3 requests per minute) for the text-embedding-ada-002 model.

Also, since this is related to TS, this channel will be the best place to ask questions: https://discord.com/channels/1059199217496772688/1133167189860565033