service_context = ServiceContext.from_defaults(
    prompt_helper=prompt_helper,
    node_parser=node_parser,
    chunk_size=1024,
    llm=OpenAI(temperature=0.0, model="gpt-4", max_tokens=output_tokens),
)
Running this raises the error: "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?"
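That error is returned when a chat model such as gpt-4 is sent to the legacy v1/completions endpoint, which typically means the `OpenAI` class in the snippet is a completion-style wrapper (for example LangChain's `langchain.llms.OpenAI`) rather than a chat one. A minimal sketch of a fix, assuming `prompt_helper`, `node_parser`, and `output_tokens` are defined as in the snippet above, is to use llama_index's own `OpenAI` LLM, which routes chat models through v1/chat/completions:

```python
# Sketch: swap the completion-style wrapper for a chat-capable one.
from llama_index import ServiceContext
from llama_index.llms import OpenAI  # llama_index's wrapper uses chat completions for gpt-4

llm = OpenAI(temperature=0.0, model="gpt-4", max_tokens=output_tokens)

service_context = ServiceContext.from_defaults(
    prompt_helper=prompt_helper,
    node_parser=node_parser,
    chunk_size=1024,
    llm=llm,
)

# Alternative, if the LLM must come from LangChain: use the chat wrapper
# instead of the completion wrapper.
# from langchain.chat_models import ChatOpenAI
# llm = ChatOpenAI(model_name="gpt-4", temperature=0.0, max_tokens=output_tokens)
```

The key point is simply that gpt-4 has no completions-endpoint variant, so whichever wrapper is used must target the chat completions API.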