Updated 10 months ago

I can't believe I'm asking this, but I no longer have any idea how to call the OpenAI API
Have you tried this?

from llama_index.core.llms import ChatMessage
from llama_index.llms.openai import OpenAI

messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="What is your name"),
]
resp = OpenAI().chat(messages)

print(resp)
will give it a try
why did they change this, I wonder
OpenAI? It should work though
it's annoying, bro
everyone has to go back and fix their code. like, why
@Teemu so this is what I have reasoned:

in case anyone has to deal with this in the near term:



  1. "import openai"
this has to be commented out

and replaced with this:

from openai import OpenAI
client = OpenAI()


  2. the way we call the OpenAI API has changed from this:


# Call the GPT model
response = openai.ChatCompletion.create(
model='gpt-4-1106-preview',
messages=[{"role": "system", "content": prompt}]
)


to this:



response = client.chat.completions.create(
model='gpt-4-1106-preview',
messages=[{"role": "system", "content": prompt}]
)


  3. the way we process the output has changed from this:
model_output = response['choices'][0]['message']['content']


to this:
model_output = response.choices[0].message.content
OpenAI updated its library, which changed the way OpenAI API calls are made: openai.ChatCompletion.create became client.chat.completions.create


As @Teemu mentioned, you can use the LlamaIndex snippet above if you want to interact with OpenAI