Hello guys.
I have implemented a Celery task in my back-end to extract metadata from my documents. To do that, I am using SummaryExtractor with gpt-3.5-turbo as the LLM.
But it seems that the concurrent Celery tasks cause rate-limit errors.
Is there a tool in LlamaIndex that can handle this type of error?
I think the latest OpenAI class has built-in handling for rate-limit errors:
pip install -U llama-index-llms-openai
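
As a minimal sketch of wiring that into your extractor: the OpenAI LLM class takes a max_retries parameter that retries rate-limit errors with backoff before raising, and the extractor's num_workers controls how many LLM calls run concurrently. The specific values here (max_retries=10, num_workers=2) are assumptions to tune against your own account's limits, not recommended settings.

# Sketch: retry-enabled OpenAI LLM for SummaryExtractor.
from llama_index.core.extractors import SummaryExtractor
from llama_index.llms.openai import OpenAI

# max_retries tells the client to retry rate-limit errors with
# backoff before raising; 10 is an assumed value, tune as needed.
llm = OpenAI(model="gpt-3.5-turbo", max_retries=10)

# Lowering num_workers reduces concurrent LLM calls per task,
# which also eases rate-limit pressure when many Celery tasks run.
extractor = SummaryExtractor(llm=llm, summaries=["self"], num_workers=2)

Note that retries only help within a single process; if many Celery workers hit the API at once, you may also want to cap worker concurrency on the Celery side.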