I'm also getting rate limit errors about OpenAI

At a glance

The community member is experiencing rate limit errors related to OpenAI, even though they are not directly calling OpenAI in their code. They are trying to use Gemini instead, but are still encountering the OpenAI-related errors. Another community member suggests that the issue may be related to the embedding model being used, and recommends defining the llm and embed_model instances at the top of the code to avoid falling back to the default OpenAI model.

I'm also getting rate limit errors about OpenAI. I'm trying to use Gemini because I have credits there, and I don't call OpenAI in any of my code, yet I'm getting errors about OpenAI still. Why is this happening?
What embedding model are you using?
You can try defining the llm and embed_model at the top of your code:

Plain Text
from llama_index.core import Settings

Settings.llm = llm  # your llm instance
Settings.embed_model = embed_model  # your embed model instance

If you don't set these two, LlamaIndex falls back to its defaults, which are OpenAI models — that's why you see OpenAI rate limit errors even though you never call OpenAI yourself.
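
For example, here is a minimal sketch of a Gemini setup (assuming the llama-index-llms-gemini and llama-index-embeddings-gemini integration packages are installed; the model names and the GOOGLE_API_KEY environment variable are illustrative):

Plain Text
# Minimal sketch: route both the LLM and the embeddings to Gemini
# so nothing falls back to the OpenAI defaults.
import os

from llama_index.core import Settings
from llama_index.llms.gemini import Gemini
from llama_index.embeddings.gemini import GeminiEmbedding

# Assumes your Gemini key is exported as GOOGLE_API_KEY;
# the model names below are illustrative, not required.
Settings.llm = Gemini(
    model="models/gemini-1.5-flash",
    api_key=os.environ["GOOGLE_API_KEY"],
)
Settings.embed_model = GeminiEmbedding(
    model_name="models/embedding-001",
    api_key=os.environ["GOOGLE_API_KEY"],
)

With both set globally, index construction and queries should use Gemini throughout.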