I'm following the notebook only; currently using this:
import base64
import requests

from llama_index.core.schema import ImageDocument  # import path may differ across llama_index versions

# response is a prior requests.Response holding the image bytes
base64str = base64.b64encode(response.content).decode("utf-8")

# alternative attempt: note this one is left as bytes, not decoded to str
base64str2 = base64.b64encode(requests.get(image_url).content)

image_document = ImageDocument(image=base64str, image_mimetype="image/jpeg")
So when you pass the base64 string, it complains that image URLs aren't supported?
What does the beginning of the base64 string look like?
'iVBORw0KGgoAAAANSUhEUgAABEgAAAU4CAYAAAC8LuKjAAAMPmlDQ1BJQ0MgUHJv
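Side note: you can sanity-check what format those bytes actually are from that prefix. A minimal sketch, assuming the base64str from the snippet above:

import base64

# Decode just the first few bytes and inspect the magic numbers.
# PNG data starts with \x89PNG (base64 prefix "iVBOR"); JPEG starts
# with \xff\xd8\xff (base64 prefix "/9j/").
header = base64.b64decode(base64str[:8])
if header.startswith(b"\x89PNG"):
    print("PNG -> use image_mimetype='image/png'")
elif header.startswith(b"\xff\xd8\xff"):
    print("JPEG -> use image_mimetype='image/jpeg'")
else:
    print("unrecognized header:", header)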
Are you using llamaindex's SimpleDirectoryReader to get the image document(s)?

Azure is likely complaining that you're not sending the right format of image data; the content of image_document may not be what their API expects.
Try image_documents instead and send the array returned from llamaindex's directory reader (it should handle whatever type their API is looking for).
I think in Python it would be like:

Python
from llama_index.core import SimpleDirectoryReader  # import path for recent llama_index versions

# Load every file in the directory; image files become ImageDocuments
image_documents = SimpleDirectoryReader("./your_img_directory/").load_data()

# azure_openai_mm_llm is your already-configured multimodal LLM instance
response = azure_openai_mm_llm.complete(
    prompt="Describe the images as an alternative text",
    image_documents=image_documents,
)
Here's where llamaindex formats the image data in its reader, so it actually should be the same as your base64: https://github.com/run-llama/llama_index/blob/main/llama-index-legacy/llama_index/legacy/readers/file/image_reader.py#L71
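Roughly, it does something like this. A simplified sketch of the linked reader, not a verbatim copy; the function name, keep_image flag, and PNG re-encode are chosen here for illustration:

import base64
from io import BytesIO

from PIL import Image
from llama_index.core.schema import ImageDocument  # import path varies by version

def load_image_as_document(file_path: str, keep_image: bool = True) -> ImageDocument:
    # Open the file with PIL and re-encode it to base64: the same shape
    # of string you built by hand above. The actual reader's save format
    # may differ from PNG.
    image = Image.open(file_path)
    buffer = BytesIO()
    image.save(buffer, format="PNG")
    image_str = base64.b64encode(buffer.getvalue()).decode("utf-8")
    return ImageDocument(
        image=image_str if keep_image else None,
        image_path=file_path,
    )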
Ahhh, if it uses image_path, then in that case it's actually a path (URL), not the image data.
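For what it's worth, ImageDocument keeps the data and the location as separate fields, so which one you fill in matters. A minimal sketch, assuming a recent llama_index where ImageDocument exposes image, image_path, and image_url:

from llama_index.core.schema import ImageDocument

# Raw data: the base64-encoded string goes in `image`
doc_from_data = ImageDocument(image=base64str, image_mimetype="image/jpeg")

# Location: a filesystem path goes in `image_path`, a URL in `image_url`
doc_from_path = ImageDocument(image_path="./your_img_directory/cat.png")
doc_from_url = ImageDocument(image_url=image_url)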
I deployed a new LLM and it's working 😛