llama_parse/examples/multimodal/multimod...

I'm following this notebook but using Azure OpenAI as my Settings.llm instead of OpenAI. To my knowledge, Azure OpenAI does NOT currently support prompt caching. How do I know if it's working here?
https://github.com/run-llama/llama_parse/blob/main/examples/multimodal/multimodal_contextual_retrieval_rag.ipynb
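
One way to check is to inspect the raw token-usage object that the OpenAI-compatible API returns: when prompt caching actually fires, the response's `usage.prompt_tokens_details.cached_tokens` field is non-zero. Below is a minimal sketch assuming a LlamaIndex `AzureOpenAI` LLM set as `Settings.llm`; the deployment name, endpoint, and API version are hypothetical placeholders you'd replace with your own.

```python
# Minimal sketch: check whether prompt caching is being applied by reading
# the cached-token counter from the raw Azure OpenAI response.
from llama_index.core import Settings
from llama_index.llms.azure_openai import AzureOpenAI

Settings.llm = AzureOpenAI(
    engine="my-gpt-4o-deployment",  # hypothetical deployment name
    api_key="...",
    azure_endpoint="https://my-resource.openai.azure.com/",  # hypothetical
    api_version="2024-10-01-preview",  # assumption: a version that reports usage details
)

# Prompt caching generally only kicks in above a minimum prompt size
# (~1024 tokens on OpenAI), so test with a long, repeated prefix.
resp = Settings.llm.complete("<long shared prefix> ... your question here")

usage = resp.raw.usage  # raw OpenAI-style CompletionUsage object
details = getattr(usage, "prompt_tokens_details", None)
cached = getattr(details, "cached_tokens", 0) if details else 0
print(f"prompt tokens: {usage.prompt_tokens}, cached tokens: {cached}")
```

If your deployment or API version doesn't support prompt caching, `prompt_tokens_details` will be absent or `cached_tokens` will stay at 0 on repeated calls, which would confirm that caching isn't happening.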