
Updated 5 months ago

It seems like there have been some issues around 429 rate limit errors on here. I've been looking into it, and most answers say it's user error rather than a problem on OpenAI's side, but I'm on the highest rate-limit tier and should be nowhere near the limits. Here's the basic snippet (I have throttling logic elsewhere):


from typing import Literal

from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.core.indices.property_graph import SchemaLLMPathExtractor

llm = OpenAI(model="gpt-4-turbo", temperature=0.3)
embed_model = OpenAIEmbedding(model_name="text-embedding-3-small", embed_batch_size=42)

entities = Literal["PERSON", "TOPIC"]
relations = Literal["EXPERT_IN", "WORKING_ON", "WORKED_WITH", "KNOWS"]

# Map each entity type to the relations it is allowed to appear in.
schema = {
    "TOPIC": ["EXPERT_IN", "WORKING_ON"],
    "PERSON": ["WORKED_WITH", "KNOWS", "EXPERT_IN", "WORKING_ON"],
}

kg_extractor = SchemaLLMPathExtractor(
    llm=llm,
    possible_entities=entities,
    possible_relations=relations,
    kg_validation_schema=schema,
    strict=True,
)

The problem I'm seeing actually looks like it's in the library. The kg extractor hits the API thousands of times in a single run and fires those calls asynchronously by default. That makes no sense to me, and I can see why this problem seems common when using this library. Are there any workarounds? I think I've now been blocked by the OpenAI API and can't get anything other than 429 errors.
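One generic client-side workaround while waiting on a library fix is to retry 429s with exponential backoff and jitter. A minimal sketch in plain Python; `RateLimitError` here is a stand-in for the real exception (e.g. `openai.RateLimitError`), and the retry counts and delays are illustrative, not values from the library:

```python
import random
import time


class RateLimitError(Exception):
    """Placeholder for the real 429 exception class."""


def with_backoff(fn, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Call fn(); on a rate-limit error, wait (exponentially longer each
    attempt, plus random jitter) and retry, up to max_retries attempts."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            sleep(delay)
```

Wrapping each extractor-triggered call in something like `with_backoff` smooths over transient 429s, but it won't help if the account has actually been blocked.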
1 comment
@Nic the code uses semaphores to avoid firing calls too quickly.

I've actually not had 429 issues here.

By default, at most 4 calls will be awaited at a given time.
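The semaphore pattern described above can be sketched like this. `extract_all` and `worker` are illustrative names, not the library's API, and the real extractor's internals may differ; the point is just that a semaphore caps in-flight calls at `num_workers` even when every task is scheduled at once:

```python
import asyncio


async def extract_all(chunks, worker, num_workers=4):
    """Run worker(chunk) for every chunk, keeping at most
    num_workers coroutines in flight at any moment."""
    sem = asyncio.Semaphore(num_workers)

    async def bounded(chunk):
        async with sem:  # blocks here once num_workers calls are in flight
            return await worker(chunk)

    # gather schedules everything, but the semaphore throttles execution
    return await asyncio.gather(*(bounded(c) for c in chunks))
```

Note this only limits *concurrency*, not requests per minute, so with thousands of chunks you can still trip a per-minute rate limit at 4 workers.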