GPT-4

Can anyone share a working example of using GPT-4? I tried using it and I keep getting an error even though I was granted access to it.
I also regenerated the API key.
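(For reference, a minimal direct call to GPT-4 with the pre-1.0 openai Python package looks roughly like the sketch below; the key and prompt are placeholders, and this is the kind of request that typically raises the quoted error when the account doesn't actually have gpt-4 enabled yet:)

import openai

openai.api_key = "sk-..."  # placeholder; key from an account granted GPT-4 access

# Minimal chat completion request against gpt-4 (pre-1.0 openai SDK style)
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response["choices"][0]["message"]["content"])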

"openai.error.InvalidRequestError: The model: gpt-4 does not exist"
Did you update your openai package? Did you use the ChatOpenAI class from langchain for gpt4?
(There are notebook examples in the llama index repo with gpt4, so it should work πŸ˜…)
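(Roughly what the ChatOpenAI route looks like, sketched against the langchain releases from around that time; the import paths here are assumptions and have moved in newer versions:)

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# Chat model pointed at gpt-4; the same class works for gpt-3.5-turbo
chat = ChatOpenAI(model_name="gpt-4", temperature=0)
result = chat([HumanMessage(content="Say hello")])  # returns an AIMessage
print(result.content)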
Yes for both πŸ™‚
@Logan M I'm actually trying to use it directly from Langchain, without an underlying index
Hmm I'm not sure what the problem is then πŸ€” I would double check that you ran pip install --upgrade langchain openai and double check the token.

Otherwise, it seems like an issue on openai's end πŸ‘€
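(One way to rule out an access problem, sketched with the pre-1.0 openai package: list the models visible to the key and check whether gpt-4 is among them:)

import openai

openai.api_key = "sk-..."  # placeholder; the regenerated key

# Models the key can actually see; if "gpt-4" is missing, the account doesn't
# have access yet, which is what the "model does not exist" error usually means.
model_ids = [m["id"] for m in openai.Model.list()["data"]]
print("gpt-4" in model_ids)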
Thanks, will try that again πŸ™‚
@Logan M It didn't work directly through Langchain, but I thought I should maybe try it using the LLMPredictor wrapper. However, now I get an error even for the gpt-3.5 model. Any idea what I'm doing wrong?

llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"))
query = "What's Daniel's address?"
response = llm_predictor.predict(Prompt(query))

And I get the error:

AttributeError: 'Prompt' object has no attribute 'input_variables'
Ohhh, I think the input to the llm_predictor needs a few special things; it's not intended to be used directly 😅
I read in the docs that it has a "predict" function, so I thought I could use it directly
The error I'm getting is for the last line
Yea, you need to use a properly initialized prompt object, like the ones in gpt_index/prompts/prompts.py in the repo

I think the docs just show all our object apis, instead of the actual user-facing ones πŸ™ƒ
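(A sketch of what that looks like, assuming the gpt_index naming from around that time; QuestionAnswerPrompt is one of the classes defined in gpt_index/prompts/prompts.py, the context string is made up for illustration, and in some versions predict() returns a (text, formatted_prompt) tuple rather than a plain string:)

from langchain.chat_models import ChatOpenAI
from gpt_index import LLMPredictor
from gpt_index.prompts.prompts import QuestionAnswerPrompt

llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"))

# A prompt subclass with declared input variables ({context_str}, {query_str}),
# which is what gives the object the input_variables attribute the error complains about.
template = (
    "Context information is below.\n"
    "{context_str}\n"
    "Given the context, answer the question: {query_str}\n"
)
qa_prompt = QuestionAnswerPrompt(template)

response = llm_predictor.predict(
    qa_prompt,
    context_str="Daniel lives at 42 Example Street.",  # placeholder context
    query_str="What's Daniel's address?",
)
print(response)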