
I'm also getting rate limit errors about OpenAI. I'm trying to use Gemini because I have credits there, and I don't call OpenAI in any of my code, yet I'm getting errors about OpenAI still. Why is this happening?
What embedding model are you using?
You can try defining the LLM and embedding models at the top of your code.

Plain Text
from llama_index.core import Settings
Settings.llm = llm  # your LLM instance
Settings.embed_model = embed_model  # your embedding model instance

If you don't set these two, LlamaIndex falls back to the defaults, which are OpenAI models. That is why you see OpenAI rate limit errors even though you never call OpenAI in your code.
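
Since you have Gemini credits, a minimal sketch of pointing both settings at Gemini could look like the following. It assumes the llama-index-llms-gemini and llama-index-embeddings-gemini integration packages are installed and that a GOOGLE_API_KEY is available; the model names are just examples.

Plain Text
# Assumption: pip install llama-index-llms-gemini llama-index-embeddings-gemini
from llama_index.core import Settings
from llama_index.llms.gemini import Gemini
from llama_index.embeddings.gemini import GeminiEmbedding

# Point both the LLM and the embedding model at Gemini so nothing
# falls back to the OpenAI defaults.
Settings.llm = Gemini(model="models/gemini-pro")  # example model name
Settings.embed_model = GeminiEmbedding(model_name="models/embedding-001")  # example model name

Both constructors can also take an explicit api_key= argument if the GOOGLE_API_KEY environment variable isn't set.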