GPT-4

At a glance

A community member hits an error when trying to use the OpenAI model in their code: the model is a chat model and is not supported by the v1/completions endpoint. They wonder whether they should be using v1/chat/completions instead. Another community member asks whether the OpenAI instance was imported from llamaindex or langchain, but there is no explicitly marked answer to the original issue.

Hey y'all, when trying to use
Plain Text
service_context = ServiceContext.from_defaults(
    prompt_helper=prompt_helper, node_parser=node_parser, chunk_size=1024,
    llm=OpenAI(temperature=0.0, model="gpt-4", max_tokens=output_tokens))

while loading a Weaviate vector store, I get the error "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?"

am I missing something?
1 comment
Did you import OpenAI from llamaindex or langchain?
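For context, that error is returned by the OpenAI API when a chat-only model such as gpt-4 is sent to the legacy v1/completions endpoint, which is what the langchain completion-style OpenAI wrapper does. Below is a minimal sketch (not from the thread) assuming a pre-0.10 llama_index release (where ServiceContext exists); the output_tokens value and the omitted prompt_helper/node_parser arguments are placeholders standing in for the original snippet's variables.

Python
# Minimal sketch: use the llama_index OpenAI wrapper, which routes chat models
# such as "gpt-4" to v1/chat/completions rather than v1/completions.
from llama_index import ServiceContext
from llama_index.llms import OpenAI  # not langchain.llms.OpenAI, which targets v1/completions

output_tokens = 512  # placeholder for the value used in the original snippet

service_context = ServiceContext.from_defaults(
    chunk_size=1024,  # prompt_helper / node_parser omitted here for brevity
    llm=OpenAI(temperature=0.0, model="gpt-4", max_tokens=output_tokens),
)

# If staying with langchain, its chat-model wrapper is the equivalent choice:
# from langchain.chat_models import ChatOpenAI
# llm = ChatOpenAI(temperature=0.0, model_name="gpt-4", max_tokens=output_tokens)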