
At a glance

The community member is following the notebook linked below but using Azure OpenAI as their Settings.llm instead of OpenAI, and they note that, to their knowledge, Azure OpenAI does not currently support prompt caching. They ask how they can tell whether the caching in the notebook is actually working.

I'm following this notebook but using Azure OpenAI as my Settings.llm instead of OpenAI. Currently, to my knowledge, Azure OpenAI does NOT support prompt caching. How do I know if it's working here?

Useful resources

https://github.com/run-llama/llama_parse/blob/main/examples/multimodal/multimodal_contextual_retrieval_rag.ipynb
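
One practical way to check is to inspect the raw token usage the API returns: when prompt caching is active, a second identical request should report non-zero cached tokens. Below is a minimal sketch, not taken from the notebook, assuming the llama-index-llms-azure-openai integration; the model, deployment name, endpoint, API key, and api_version are placeholders, and whether usage.prompt_tokens_details.cached_tokens is populated at all depends on what your Azure deployment and API version support.

```python
# Minimal sketch (not from the notebook): swap Azure OpenAI into Settings.llm,
# then send the same long prompt twice and inspect the raw usage object.
from llama_index.core import Settings
from llama_index.core.llms import ChatMessage
from llama_index.llms.azure_openai import AzureOpenAI

Settings.llm = AzureOpenAI(
    model="gpt-4o",                         # underlying model name
    deployment_name="my-gpt4o-deployment",  # placeholder: your Azure deployment
    api_key="<your-azure-openai-key>",      # placeholder
    azure_endpoint="https://my-resource.openai.azure.com/",   # placeholder
    api_version="2024-10-01-preview",       # assumption: a version that reports usage details
)

# Prompt caching (on providers that support it) typically only engages for
# prompts above a minimum length (~1024 tokens on OpenAI), so use a long prompt.
long_prompt = "..."  # the document-heavy context prompt from your pipeline

for attempt in range(2):
    response = Settings.llm.chat([ChatMessage(role="user", content=long_prompt)])
    usage = response.raw.usage  # raw OpenAI/Azure usage object
    details = getattr(usage, "prompt_tokens_details", None)
    cached = getattr(details, "cached_tokens", None) if details is not None else None
    print(f"attempt {attempt}: prompt_tokens={usage.prompt_tokens}, cached_tokens={cached}")
```

If cached_tokens is still None or 0 on the second call, the deployment is not caching the prompt. The rest of the retrieval pipeline will still run correctly either way; prompt caching only affects cost and latency, not results.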