Are there any helpers in LlamaIndex?
Niels
7 months ago
Are there any helpers in LlamaIndex that we can use to check rate limits for specific LLMs?
We are trying to use our Azure OpenAI rate limits to the fullest, and we want to use OpenAI directly as a backup. Would this be possible?
Logan M
7 months ago
Hmm, there are already retries for rate limits, although these aren't user-configurable yet; I've been meaning to make that more configurable.
Beyond that, it's just try/except and handling from there.
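Logan's try/except suggestion can be sketched like this. Everything below is a stand-in: the LLM classes and the `RateLimitError` are illustrative, not llama_index's actual classes (in a real setup you'd use llama_index's `AzureOpenAI`/`OpenAI` wrappers and catch the rate-limit exception raised by the underlying client). The point is just the shape of the fallback:

```python
class RateLimitError(Exception):
    """Stand-in for the rate-limit exception the provider raises."""


class AzureLLM:
    """Stand-in primary LLM; pretend Azure is over its quota."""

    def complete(self, prompt: str) -> str:
        raise RateLimitError("Azure OpenAI quota exhausted")


class OpenAILLM:
    """Stand-in backup LLM that always succeeds."""

    def complete(self, prompt: str) -> str:
        return f"openai: {prompt}"


def complete_with_fallback(prompt: str, primary, backup) -> str:
    """Try the primary LLM first; fall back to the backup on a rate-limit error."""
    try:
        return primary.complete(prompt)
    except RateLimitError:
        return backup.complete(prompt)


print(complete_with_fallback("hello", AzureLLM(), OpenAILLM()))
```

Any non-rate-limit exception still propagates, so you only fail over for the one error you actually want to handle.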
Logan M
7 months ago
https://github.com/run-llama/llama_index/blob/83c8e78f51a06430588beb935bea448239c5a106/llama-index-integrations/llms/llama-index-llms-openai/llama_index/llms/openai/utils.py#L138
Logan M
7 months ago
Which then gets used here:
https://github.com/run-llama/llama_index/blob/83c8e78f51a06430588beb935bea448239c5a106/llama-index-integrations/llms/llama-index-llms-openai/llama_index/llms/openai/base.py#L80
Logan M
7 months ago
(Azure extends the OpenAI class, so it gets these retries out of the box.)
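For readers who don't want to chase the links: the decorator in the linked `utils.py` implements retry-with-backoff on rate-limit errors. Here is a minimal stdlib sketch of that idea; the names (`retry_on_rate_limit`, `flaky_completion`) and the `RateLimitError` class are hypothetical stand-ins, not llama_index's actual implementation:

```python
import time


class RateLimitError(Exception):
    """Stand-in for the provider's rate-limit exception."""


def retry_on_rate_limit(max_retries: int = 3, base_delay: float = 1.0):
    """Decorator: retry on RateLimitError with exponential backoff."""

    def decorator(fn):
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return fn(*args, **kwargs)
                except RateLimitError:
                    if attempt == max_retries - 1:
                        raise  # out of retries; let the caller handle it
                    time.sleep(base_delay * 2 ** attempt)

        return wrapper

    return decorator


calls = {"n": 0}


@retry_on_rate_limit(max_retries=3, base_delay=0.01)
def flaky_completion(prompt: str) -> str:
    """Fails with a rate-limit error twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("429")
    return f"ok: {prompt}"


print(flaky_completion("hi"))  # succeeds on the third attempt
```

Once the retries are exhausted, the exception is re-raised, which is exactly the point at which the try/except fallback from earlier in the thread takes over.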
Niels
7 months ago
Cool, thanks! So if we want to switch LLMs after hitting the rate limit too many times, we should just catch the specific rate limit error (I don't even know if it has its own exception class)?
Logan M
7 months ago
Yeah, that would be the approach I'd take. I think it should raise a specific error class (something about retries or rate limits).
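A sketch of the "switch after hitting the rate limit too many times" idea: count rate-limit failures on the primary LLM and route everything to the backup once a threshold is crossed. The classes below are illustrative stand-ins, not llama_index code; in practice you would catch whatever rate-limit exception your client actually raises (the current `openai` Python package exposes `openai.RateLimitError`, but verify against your version):

```python
class RateLimitError(Exception):
    """Stand-in for the specific rate-limit exception class."""


class FallbackLLM:
    """Route to `primary` until it rate-limits `threshold` times,
    then send everything to `backup`."""

    def __init__(self, primary, backup, threshold: int = 3):
        self.primary = primary
        self.backup = backup
        self.threshold = threshold
        self.failures = 0

    def complete(self, prompt: str) -> str:
        if self.failures >= self.threshold:
            # Threshold crossed: skip the primary entirely.
            return self.backup.complete(prompt)
        try:
            return self.primary.complete(prompt)
        except RateLimitError:
            self.failures += 1
            return self.backup.complete(prompt)


class AlwaysLimited:
    """Stand-in primary that is permanently rate-limited."""

    def complete(self, prompt: str) -> str:
        raise RateLimitError("429")


class Backup:
    """Stand-in backup that always succeeds."""

    def complete(self, prompt: str) -> str:
        return f"backup: {prompt}"


llm = FallbackLLM(AlwaysLimited(), Backup(), threshold=2)
for _ in range(3):
    print(llm.complete("hi"))  # backup answers every time
```

A production version would probably also reset or decay the failure counter over time, so the primary gets another chance once its quota window rolls over.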