Hitting OpenAI's RateLimitError. Any workarounds for this?
Alternatively, I suppose I could chunk the metadata-extraction work and reschedule the task whenever the rate limit is hit.
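For the reschedule-on-rate-limit idea, a minimal retry-with-exponential-backoff sketch — the `RateLimitError` class here is a stand-in (you'd catch the real one from the `openai` SDK), and the decorator names are hypothetical:

```python
import random
import time
from functools import wraps

class RateLimitError(Exception):
    """Stand-in for the SDK's rate-limit exception; swap in the real class."""

def with_backoff(max_retries=5, base_delay=1.0):
    """Retry the wrapped call on rate limits, doubling the delay each time with jitter."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            delay = base_delay
            for attempt in range(max_retries):
                try:
                    return fn(*args, **kwargs)
                except RateLimitError:
                    if attempt == max_retries - 1:
                        raise  # out of retries; let the caller handle it
                    time.sleep(delay + random.uniform(0, delay))
                    delay *= 2
        return wrapper
    return decorator
```

Backoff only helps with per-minute limits, though; a 200 requests/day cap just runs dry, so chunking plus deferred scheduling is still needed on top.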
RateLimitError: Rate limit reached for default-gpt-3.5-turbo in organization org-0g4tBZCHdpzT3CoGj0Hdrlo0 on requests per day. Limit: 200 / day. Please try again in 7m12s. Contact us through our help center at help.openai.com if you continue to have issues.
xD It's kinda ridiculous. Actually, I can get ~200×10 RPD if I chunk the nodes and run them concurrently across the GPT-3.5/GPT-4 family. Was mildly hoping to see if somebody else has figured out a better way.
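Spreading chunks across the model family could look something like this round-robin assignment — the model list is a hypothetical pool, adjust to whatever your org actually has quota for:

```python
from itertools import cycle

# Hypothetical pool of models, each with its own requests-per-day budget.
MODELS = ["gpt-3.5-turbo", "gpt-3.5-turbo-16k", "gpt-4"]

def assign_models(chunks):
    """Pair each chunk with a model, round-robin, to spread load across per-model RPD limits."""
    return [(chunk, model) for chunk, model in zip(chunks, cycle(MODELS))]
```

Note this assumes per-model limits are independent; if the cap is org-wide, fanning out buys nothing.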
I'm not sure I want to dip into my Azure credits (the RPD/RPM/TPM limits there are much better) just yet, 'cause I have $2.5k in OpenAI credits set to expire in November.