Hi guys, I am trying to run this example

At a glance

A community member is trying to run an example notebook from the LlamaIndex documentation, but hits an error when using the azure_openai_mm_llm model to complete a prompt with an image. The error message says that image_url content is only supported by certain models. Other community members suggest the issue may be related to the API version: the example uses "2023-12-01-preview", while the poster's deployment uses "2024-02-15-preview". They wonder whether Azure updated its API and suggest deploying the model with the same API version and model name as the example. One community member notes that they do not use Azure, and there is no explicitly marked answer to the issue.

Hi guys, I am trying to run this example notebook - https://docs.llamaindex.ai/en/stable/examples/multi_modal/azure_openai_multi_modal/?h=azureopenaimultimodal

When I try to get the model to respond, e.g.:

Plain Text
complete_response = azure_openai_mm_llm.complete(
    prompt="Describe the images as an alternative text",
    image_documents=[image_document],
)

I get this error:

Plain Text
BadRequestError: Error code: 400 - {'error': {'message': 'Invalid content type. image_url is only supported by certain models.', 'type': 'invalid_request_error', 'param': 'messages.[0].content.[1].type', 'code': None}}

How do I resolve this?
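(For reference, `image_document` in the snippet above would typically be loaded from an image file. A minimal sketch of that step; the file path and the use of SimpleDirectoryReader are assumptions for illustration, not taken from the thread:)

Plain Text
from llama_index.core import SimpleDirectoryReader

# Load a local image into an ImageDocument (the path is a placeholder)
image_document = SimpleDirectoryReader(input_files=["./image.png"]).load_data()[0]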
8 comments
Did you use the same LLM as the notebook?

Plain Text
from llama_index.multi_modal_llms.azure_openai import AzureOpenAIMultiModal

azure_openai_mm_llm = AzureOpenAIMultiModal(
    engine="gpt-4-vision-preview",
    api_version="2023-12-01-preview",
    model="gpt-4-vision-preview",
    max_new_tokens=300,
)
yeah, just changed the required fields
api_version is 2024-02-15-preview
I wonder if Azure updated their API in that version
what can I do now?
Deploy your model with the same API version/model name as the example? Not sure
I don't use Azure lol
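(Putting that suggestion into code: a minimal sketch of pointing the client at a vision-capable Azure deployment, with the API version matching the example notebook. The deployment name, endpoint, and key below are placeholders, not values from the thread:)

Plain Text
import os
from llama_index.multi_modal_llms.azure_openai import AzureOpenAIMultiModal

# Sketch only: the deployment name, endpoint, and key are placeholders.
# The Azure deployment itself must be a vision-capable model (e.g. gpt-4-vision-preview);
# otherwise Azure rejects image_url content with the 400 error shown above.
azure_openai_mm_llm = AzureOpenAIMultiModal(
    engine="my-gpt4-vision-deployment",  # your Azure deployment name
    model="gpt-4-vision-preview",
    api_version="2023-12-01-preview",    # version used in the example notebook
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    max_new_tokens=300,
)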