
Hi there! Is there an equivalent of LangChain's LLM cache for LlamaIndex? (Specifically, I'm interested in using Azure Cosmos DB as the cache storage.) https://python.langchain.com/docs/modules/model_io/llms/llm_caching
Yes, you can take a look at caching here: https://docs.llamaindex.ai/en/stable/module_guides/loading/ingestion_pipeline/root.html#caching. I'm not sure if an Azure Cosmos DB cache is available, but Redis, MongoDB, etc. are supported if you want remote caching.
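
For illustration, here's a minimal sketch of wiring a remote Redis cache into an ingestion pipeline, roughly following the linked docs. The import paths assume the newer `llama_index.core` package layout (plus the `llama-index-storage-kvstore-redis` integration), so they may differ depending on your version; the host, port, and collection name are placeholders:

```python
# pip install llama-index llama-index-storage-kvstore-redis
from llama_index.core import Document
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.ingestion import IngestionPipeline, IngestionCache
from llama_index.storage.kvstore.redis import RedisKVStore as RedisCache

# Back the ingestion cache with a remote Redis instance so that repeated
# pipeline runs skip transformations whose inputs haven't changed.
ingest_cache = IngestionCache(
    cache=RedisCache.from_host_and_port(host="127.0.0.1", port=6379),
    collection="my_ingest_cache",  # placeholder collection name
)

pipeline = IngestionPipeline(
    transformations=[SentenceSplitter(chunk_size=512, chunk_overlap=20)],
    cache=ingest_cache,
)

# First run computes the transformations; re-running on the same
# documents should hit the Redis cache instead of recomputing.
nodes = pipeline.run(documents=[Document(text="hello world")])
```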