I am trying to run the L2 lab from the deeplearning.ai course in my local Jupyter notebook. I have the correct version of Python and have used the requirements.txt file that was in the first lesson for setup.

My OpenAI key connects fine to the gpt-3.5-turbo model via another test call.
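(That test call is roughly along these lines; a sketch using the plain OpenAI client, assuming OPENAI_API_KEY is set in the environment and an arbitrary prompt:)

# Sanity check of the key with the raw OpenAI client, outside LlamaIndex.
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(resp.choices[0].message.content)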

When I run the following code, however, I get an error shown below:

from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo")
response = llm.predict_and_call(
    [add_tool, mystery_tool],
    "Tell me the output of the mystery function on 2 and 9",
    verbose=True,
)

print(str(response))
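(For reference, add_tool and mystery_tool are defined earlier in the lab notebook, roughly along these lines; this is just a sketch, the exact function bodies come from the lesson:)

# Sketch of how the two tools are typically defined in the lesson notebook.
from llama_index.core.tools import FunctionTool

def add(x: int, y: int) -> int:
    """Add two integers and return the result."""
    return x + y

def mystery(x: int, y: int) -> int:
    """Mystery function that operates on two numbers."""
    return (x + y) * (x + y)

add_tool = FunctionTool.from_defaults(fn=add)
mystery_tool = FunctionTool.from_defaults(fn=mystery)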
The error is:

Retrying llama_index.llms.openai.base.OpenAI._chat in 0.974705252265967 seconds as it raised APIConnectionError: Connection error…

This occurs a few times and then the whole call errors out with a long trace.

This is as simple a call as I can think of and it still seems to fail. Any ideas?

Thanks,
L
Try directly providing the API key?

OpenAI(..., api_key="...")

Usually, though, if it's an API key issue, it will say that in the error.
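Something like this, for example (a sketch; the key string is just a placeholder):

# Pass the key explicitly instead of relying on the OPENAI_API_KEY env var.
# The key string below is a placeholder.
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo", api_key="sk-...")
print(llm.complete("Hello"))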
Thanks. I tried that already. Did not work.
Do you have any more info? The error really should have something more helpful in it (e.g. rate limit, quota exceeded, API key issue, etc.).
Hi Logan, thanks for sticking with this. There is clearly something very basic that I am getting wrong here.

I went to the https://docs.llamaindex.ai/en/stable/api_reference/llms/openai/ docs page and tried to just run the simple example there:

import os
import openai

os.environ["OPENAI_API_KEY"] = "sk-..."
openai.api_key = os.environ["OPENAI_API_KEY"]

from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo")
stream = llm.stream("Hi, write a short story")
for r in stream:
    print(r.delta, end="")

(I confirmed that I can invoke the LLM correctly using a separate bit of code, so I know that the OpenAI key is working for this selected model.)

Even this errors out as follows:

---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[14], line 11
      7 from llama_index.llms.openai import OpenAI
      9 llm = OpenAI(model="gpt-3.5-turbo")
---> 11 stream = llm.stream("Hi, write a short story")
     13 for r in stream:
     14     print(r.delta, end="")

//Cut out for brevity//

File /opt/anaconda3/envs/testllamaindex/lib/python3.11/site-packages/llama_index/core/llms/llm.py:218, in LLM._log_template_data(self, prompt, prompt_args)
    213 def _log_template_data(
    214     self, prompt: BasePromptTemplate, prompt_args: Any
    215 ) -> None:
    216     template_vars = {
    217         k: v
--> 218         for k, v in ChainMap(prompt.kwargs, prompt_args).items()
    219         if k in prompt.template_vars
    220     }
    221     with self.callback_manager.event(
    222         CBEventType.TEMPLATING,
    223         payload={
    (...)
    228         },
    229     ):
    230         pass

AttributeError: 'str' object has no attribute 'kwargs'

It appears that there is an underlying mismatch with some library.
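As a side note on that AttributeError: as far as I can tell, llm.stream() expects a PromptTemplate rather than a raw string (that is where prompt.kwargs comes from in the trace), while llm.stream_complete() takes a plain string. A sketch, assuming a recent llama-index:

from llama_index.core import PromptTemplate
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo")

# stream_complete() takes a plain string and yields responses with a .delta attribute.
for r in llm.stream_complete("Hi, write a short story"):
    print(r.delta, end="")

# stream() expects a PromptTemplate and yields plain string tokens.
template = PromptTemplate("Hi, write a short story about {topic}")
for token in llm.stream(template, topic="a robot"):
    print(token, end="")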
For what it is worth, I can also reproduce the original APIConnectionError with the following:

from llama_index.llms.openai import OpenAI

resp = OpenAI().complete("Paul Graham is ")

Error dump is attached.
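In case it helps narrow it down, one way to get more detail out of that APIConnectionError (a sketch, assuming the openai 1.x client, where the underlying low-level error is usually attached as __cause__):

from llama_index.llms.openai import OpenAI
from openai import APIConnectionError

try:
    print(OpenAI().complete("Paul Graham is "))
except APIConnectionError as e:
    # The wrapped low-level error (DNS, proxy, TLS, ...) usually carries the real reason.
    print(type(e.__cause__), e.__cause__)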
OK, so I switched the code to use Mistral and that worked. I then just created a new OpenAI key and the code started working. So weird that the old OpenAI key was working fine for another bit of test code. Anyhow, it's all working now. Thanks for your help.
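For anyone landing here later, the Mistral swap was roughly this (a sketch; it requires the llama-index-llms-mistralai package, and the model name is just an example):

from llama_index.llms.mistralai import MistralAI

# Placeholder key and example model name; adjust to whatever your account supports.
llm = MistralAI(model="mistral-small-latest", api_key="...")
print(llm.complete("Paul Graham is "))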