Hello, I'm trying the correctness evaluator and hitting OpenAI API timeout errors. Has anyone else run into this? Thanks for taking a look.

Plain Text
site-packages/llama_index/evaluation/correctness.py", line 134, in aevaluate
    eval_response = await self._service_context.llm.apredict(
....
site-packages/openai/_base_client.py", line 1442, in _request
    raise APITimeoutError(request=request) from err
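For reference, here is roughly how I'm invoking it (simplified; the model, query, response, and reference values are placeholders):

Python
# Minimal sketch of my setup (legacy llama_index API, matching the
# import paths in the traceback above; exact values are placeholders)
from llama_index import ServiceContext
from llama_index.llms import OpenAI
from llama_index.evaluation import CorrectnessEvaluator

service_context = ServiceContext.from_defaults(llm=OpenAI(model="gpt-4"))
evaluator = CorrectnessEvaluator(service_context=service_context)

result = evaluator.evaluate(
    query="What is the capital of France?",
    response="Paris is the capital of France.",
    reference="Paris.",
)
print(result.passing, result.score)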
7 comments
Seems like a generic timeout. Are you still having issues with it? I just tried it and it seemed to work fine for me.
Yes. I am trying to run evaluation on a bunch of queries back-to-back, and have tried the following options (sketch of both attempts below):

  1. BatchEvalRunner's evaluate_queries, but this throws
Plain Text
    raise NotImplementedError(
NotImplementedError: Async selection not supported for Pydantic Selectors

(the query engine needs the pydantic selector)

  2. Instead of BatchEvalRunner, a plain for loop -- in which case the first call seems to go through, but the subsequent one hits the timeout error above.
Any thoughts or other options available here? Thanks.
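Roughly what the two attempts look like (simplified; the queries, references, and query engine are placeholders, and evaluator is the CorrectnessEvaluator from above):

Python
# Sketch of both attempts; evaluator and query_engine come from my setup above
from llama_index.evaluation import BatchEvalRunner

queries = ["q1", "q2", "q3"]           # my eval queries (placeholders)
references = ["ref1", "ref2", "ref3"]  # gold answers (placeholders)

# Attempt 1: BatchEvalRunner -- raises the NotImplementedError above,
# since the query engine's pydantic selector has no async path
runner = BatchEvalRunner({"correctness": evaluator}, workers=4)
eval_results = runner.evaluate_queries(
    query_engine, queries=queries, reference=references
)

# Attempt 2: plain for loop -- first call succeeds, second one times out
for query, reference in zip(queries, references):
    response = query_engine.query(query)
    result = evaluator.evaluate(
        query=query, response=str(response), reference=reference
    )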
@Teemu @Logan M @WhiteFang_Jr still running into this -- any thoughts?
Not entirely sure, but I know you can increase the timeout on the OpenAI LLM class. Also, maybe double-check that you have the latest openai client version.
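Something like this, for example (a sketch; the kwarg names are from the legacy llama_index OpenAI class and may differ by version):

Python
# Sketch: raise the request timeout and retries on the LLM, then rebuild
# the evaluator with it (kwarg names may vary by llama_index version)
from llama_index import ServiceContext
from llama_index.llms import OpenAI
from llama_index.evaluation import CorrectnessEvaluator

llm = OpenAI(
    model="gpt-4",
    timeout=120.0,   # seconds before APITimeoutError is raised
    max_retries=3,   # retry transient failures before giving up
)
service_context = ServiceContext.from_defaults(llm=llm)
evaluator = CorrectnessEvaluator(service_context=service_context)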
Weird, we should probably implement async selectors? Very weird that it isn't supported, tbh.
Update: it looks like it's always the second call to correctness_evaluator.evaluate() that hits the timeout in my for loop. Is there some logic that could cause this behavior on the second call but not the first, from a regular for loop?
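One thing I'm going to try (unverified): since the sync evaluate() appears to drive an event loop per call, running everything through the async aevaluate() inside a single loop, e.g.:

Python
# Workaround sketch (unverified): keep all evaluations in one event loop
# by using the async API directly, instead of one loop per sync call.
# Assumes the same evaluator, query_engine, queries, references as above.
import asyncio

async def run_evals():
    results = []
    for query, reference in zip(queries, references):
        response = query_engine.query(query)  # sync query; only the eval is async
        result = await evaluator.aevaluate(
            query=query, response=str(response), reference=reference
        )
        results.append(result)
    return results

results = asyncio.run(run_evals())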

Definitely, async selectors would make it possible to use the BatchEvalRunner in this case.