
Updated 3 months ago


I am getting this error:
openai.error.RateLimitError: Rate limit reached for gpt-3.5-turbo in organization on tokens per min. Limit: 160000 / min. Current: 159190 / min. Contact us through our help center at help.openai.com if you continue to have issues.
I am creating an index:
vector_store = FaissVectorStore(faiss_index=faiss_index)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
index.storage_context.persist("./webpage_data_index")
Can I use gpt-3.5-turbo-16k in this code? How can I configure that?
Please stand by @elmatero as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
Based on the information provided in the extracts, it seems you are hitting the rate limit for the gpt-3.5-turbo model. However, the extracts do not provide information on how to switch to the gpt-3.5-turbo-16k model or how to handle rate limit errors.

The extracts do show how to set up a ServiceContext with a specific model, as shown in this code snippet:

from llama_index import ServiceContext
from llama_index.llms import OpenAI

llm = OpenAI(model="gpt-3.5-turbo", temperature=0.1)
service_context = ServiceContext.from_defaults(llm=llm)


In this code, the OpenAI class is instantiated with the model parameter set to "gpt-3.5-turbo". gpt-3.5-turbo-16k is a valid OpenAI model name, so you can replace "gpt-3.5-turbo" with "gpt-3.5-turbo-16k" to get the larger 16k-token context window.
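A minimal sketch of wiring the 16k model into the index-building code from the question, assuming the legacy `ServiceContext` API shown above (`faiss_index` and `documents` are taken as already defined, as in the original snippet):

```python
from llama_index import ServiceContext, StorageContext, VectorStoreIndex
from llama_index.llms import OpenAI
from llama_index.vector_stores import FaissVectorStore

# Use the 16k-context variant of gpt-3.5-turbo for LLM calls.
llm = OpenAI(model="gpt-3.5-turbo-16k", temperature=0.1)
service_context = ServiceContext.from_defaults(llm=llm)

vector_store = FaissVectorStore(faiss_index=faiss_index)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Pass service_context so the index (and query engines built from it)
# use the configured 16k model instead of the default.
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
    service_context=service_context,
)
index.storage_context.persist("./webpage_data_index")
```

Note that the error in the question is an organization-level tokens-per-minute limit; switching to the 16k model changes the context window but may not raise your rate-limit quota.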

However, the extracts do not provide information on how to handle rate limit errors. For more detailed information, you may need to refer to the official OpenAI and LlamaIndex documentation or contact OpenAI support.
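For the rate-limit side, a common pattern (not covered in the extracts) is to retry failed calls with exponential backoff and jitter. A minimal stdlib sketch; `retry_with_backoff` and `flaky` are illustrative names, not part of the openai or llama_index libraries:

```python
import random
import time

def retry_with_backoff(fn, max_retries=5, base_delay=1.0, exceptions=(Exception,)):
    """Call fn(), retrying on failure with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except exceptions:
            if attempt == max_retries - 1:
                raise  # out of retries; re-raise the last error
            # Sleep base_delay * 2^attempt, plus random jitter to avoid
            # many clients retrying in lockstep.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

# Example: a function that fails twice (simulating rate-limit errors),
# then succeeds on the third call.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limit")
    return "ok"

result = retry_with_backoff(flaky, base_delay=0.01)
```

In practice you would pass your OpenAI/query call as `fn` and catch the rate-limit exception class specifically via the `exceptions` parameter, rather than the broad `Exception` default.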

Source: GitHub
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
Yes, this didn't change anything.