Find answers from the community

Updated 3 months ago

Rate limit

Hi everyone, I have a question about OpenAI. I am currently using the free credit since I am still in the testing phase, and today I ran into a problem. I use "gpt-3.5-turbo" for my queries, and today it started telling me "Rate limit reached for default-gpt-3.5-turbo", saying the limit is 3 requests/min. I have never had this error before, and I have always used the same account with the same organisation and the same key at well over 3 requests/min. Does it come from an OpenAI update or something?
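If the limit is only hit intermittently, a common workaround is to retry with exponential backoff. A minimal sketch of a generic retry helper (the function and parameter names here are my own, not from any library):

```python
import time

def with_backoff(fn, retries=5, base_delay=1.0,
                 retry_on=(Exception,), sleep=time.sleep):
    """Call fn(); on a retryable error, wait and try again,
    doubling the wait each attempt."""
    delay = base_delay
    for attempt in range(retries):
        try:
            return fn()
        except retry_on:
            if attempt == retries - 1:
                raise  # out of retries, re-raise the last error
            sleep(delay)   # back off before the next attempt
            delay *= 2
```

You would wrap the chat call in it, e.g. `with_backoff(lambda: openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=msgs), retry_on=(openai.error.RateLimitError,))` — assuming the pre-1.0 `openai` SDK, where `openai.error.RateLimitError` is the rate-limit exception.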
4 comments
You'll have to add payment info. 3 requests/min is super strict, not sure how they decide when to hit people with that.

You can set spending limits on your account though. I have mine set to $20/month lol
How do you use gpt-3.5-turbo instead of the default? I did this, but my code still seems to be running text-davinci-003:

service_context = ServiceContext.from_defaults(
    llm=OpenAI(
        model='gpt-3.5-turbo',
    ),
)

docindex = ListIndex(nodes, service_context=service_context)
You're missing a few lines of code:

Plain Text
from langchain.chat_models import ChatOpenAI
from llama_index import LLMPredictor, ServiceContext

service_context = ServiceContext.from_defaults(
    llm_predictor=LLMPredictor(
        llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
    )
)
Oh ok, so it is a new thing 😱
But I have just seen that gpt-3.5-turbo-0613 has a limit of 60 requests per minute, which should work for testing. I am currently using that. But if you are a pay-as-you-go user, the chat limit is still 60 RPM for the first 48 hours, and after that it rises to 3,500 RPM or 90,000 TPM.
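If you want to stay under a given requests-per-minute cap on the client side rather than waiting for the API to reject you, a simple pacer works. A sketch (the class and parameter names are my own; pass `rpm=3` for the free-tier limit discussed above):

```python
import time

class RateLimiter:
    """Space out calls so no more than `rpm` happen per minute."""

    def __init__(self, rpm, clock=time.monotonic, sleep=time.sleep):
        self.min_interval = 60.0 / rpm  # seconds between calls
        self.clock = clock
        self.sleep = sleep
        self.last = None

    def wait(self):
        """Block until it is safe to make the next request."""
        now = self.clock()
        if self.last is not None:
            elapsed = now - self.last
            if elapsed < self.min_interval:
                self.sleep(self.min_interval - elapsed)
        self.last = self.clock()
```

Call `limiter.wait()` immediately before each API request; the first call returns at once, and later calls sleep just long enough to respect the cap.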