


Hi all, I am having an issue today with the OpenAI Agent using gpt-4-1106-preview, while everything worked fine yesterday. It looks as if today, when the tool responds (the results come back correctly in JSON from the external API), the Agent hits a shorter timeout and doesn't process the response from the tool. Everything works OK when the external API responds faster (and with less data).
You can manually set the timeout in the LLM object -- I think the default is 60s?
llm = OpenAI(...., timeout=60.0)
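For reference, a minimal sketch of that suggestion wired into an agent (imports assume the llama_index package layout from around the time of this thread and may differ in newer releases; the empty tools list is a placeholder):

from llama_index.llms import OpenAI
from llama_index.agent import OpenAIAgent

# timeout (in seconds) is forwarded to the underlying OpenAI client,
# so slow, tool-heavy turns get more time to complete
llm = OpenAI(model="gpt-4-1106-preview", temperature=0.0, timeout=60.0)

# pass the FunctionTool(s) wrapping the external API instead of the empty list
agent = OpenAIAgent.from_tools([], llm=llm, verbose=True)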
ah this is a good suggestion, thanks a lot
I will keep you posted
I'm surprised 60s isn't long enough for a single response, but I guess these newer models are probably a lot slower right now
I will try and let you know, 60.0 is a long time
nope, it doesn't work....
llm = OpenAI(model="gpt-4-1106-preview", temperature=0.0, timeout=60.0)
Is it timing out after 60s?
Or can you paste some of the error? Curious what's actually happening lol
yes very interesting
first query: the agent resets (or appears to) - no error, but it asks me what I need
on the second query the results are analyzed correctly
there are no errors
interestingly enough the agent on ChatGPT works for that same query
Attachment: image.png
Is this using the OpenAIAssistant API? Or what kind of setup do you have?
super confused haha
so I have the same API that is used by both a local agent created using LlamaIndex (OpenAI Agent) and by ChatGPT
local agent is running on agent.wordlift.io
the response of the API, in some cases, might overflow GPT-3.5's context window, but it works with GPT-4
API takes 10s to respond
timeout=80.0 seems to work. I will try to move to Azure tomorrow and see if the response gets better - the future is there now 🙂
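For anyone who lands on this thread later, here is a rough sketch of the shape of the setup that ended up working with the 80-second timeout; the tool function and URL below are placeholders, not the actual agent.wordlift.io implementation:

import requests

from llama_index.llms import OpenAI
from llama_index.agent import OpenAIAgent
from llama_index.tools import FunctionTool

def query_external_api(query: str) -> str:
    """Placeholder for the external API, which can take ~10s and return a large JSON payload."""
    resp = requests.get("https://api.example.com/search", params={"q": query}, timeout=30)
    return resp.text

tool = FunctionTool.from_defaults(fn=query_external_api)

# 80s leaves headroom for the model to read and process the large tool output
llm = OpenAI(model="gpt-4-1106-preview", temperature=0.0, timeout=80.0)
agent = OpenAIAgent.from_tools([tool], llm=llm, verbose=True)

print(agent.chat("example query"))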