llama_parse/examples/multimodal/multimod...
farzzy528 · 2 months ago
I'm following this notebook but using Azure OpenAI as my Settings.llm instead of OpenAI. To my knowledge, Azure OpenAI does NOT currently support prompt caching. How do I know if it's working here?
https://github.com/run-llama/llama_parse/blob/main/examples/multimodal/multimodal_contextual_retrieval_rag.ipynb
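One hedged way to check is to inspect the raw usage payload rather than trusting documentation: on OpenAI-compatible endpoints, a prompt-cache hit is reported as `cached_tokens` under `usage.prompt_tokens_details`, so if a repeated long prompt comes back with 0 (or without the field), caching isn't active. A minimal sketch with the OpenAI Python SDK, where the deployment name, endpoint, and API version are hypothetical placeholders:

```python
# Sketch: detect whether prompt caching fires on an Azure OpenAI deployment.
# A cache hit can only occur on a repeat of the same long prefix, so we send
# the identical prompt twice and compare cached_tokens between attempts.
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="...",                                         # your Azure OpenAI key
    azure_endpoint="https://my-resource.openai.azure.com/",  # hypothetical
    api_version="2024-10-01-preview",                        # hypothetical
)

# Prompt caching generally requires a prompt of roughly 1024+ tokens.
long_prompt = "Summarize the following text. " + ("lorem ipsum " * 600)

for attempt in (1, 2):
    resp = client.chat.completions.create(
        model="my-gpt-4o-deployment",  # hypothetical Azure deployment name
        messages=[{"role": "user", "content": long_prompt}],
    )
    details = resp.usage.prompt_tokens_details
    cached = (details.cached_tokens or 0) if details else 0
    print(f"attempt {attempt}: prompt_tokens={resp.usage.prompt_tokens}, "
          f"cached_tokens={cached}")
```

If `cached_tokens` stays 0 on the second call, the endpoint isn't caching; the notebook's pipeline still runs, you just don't get the caching cost savings.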