Caching | 🦜🔗 Langchain
gyx119
9 months ago
Hi there! Is there an equivalent of the LangChain LLM cache for LlamaIndex? (Specifically, I am interested in using Azure Cosmos DB as the cache storage.)
https://python.langchain.com/docs/modules/model_io/llms/llm_caching
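For context, the LangChain feature linked above is the global LLM cache. A minimal sketch using the in-memory backend (import paths and the invoke call vary across LangChain versions, and the OpenAI model here is just an illustrative choice):

```python
from langchain.globals import set_llm_cache
from langchain.cache import InMemoryCache
from langchain.llms import OpenAI

# Cached LLM responses are kept in process memory; LangChain also ships
# SQLite, Redis, and other cache backends behind the same set_llm_cache hook.
set_llm_cache(InMemoryCache())

llm = OpenAI(model_name="gpt-3.5-turbo-instruct")  # requires OPENAI_API_KEY

llm.invoke("Tell me a joke")  # first call hits the API
llm.invoke("Tell me a joke")  # identical prompt is served from the cache
```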
WhiteFang_Jr
9 months ago
Yes, you can take a look at caching here:
https://docs.llamaindex.ai/en/stable/module_guides/loading/ingestion_pipeline/root.html#caching
Not sure if Azure Cosmos DB caching is available, but Redis, Mongo, etc. are present if you want remote caching.
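Enabling the ingestion pipeline cache from the linked docs looks roughly like this (a minimal sketch assuming llama-index 0.10+ import paths; the SentenceSplitter transformation and the persist path are illustrative):

```python
from llama_index.core import Document
from llama_index.core.ingestion import IngestionPipeline, IngestionCache
from llama_index.core.node_parser import SentenceSplitter

pipeline = IngestionPipeline(
    transformations=[SentenceSplitter(chunk_size=512)],
    cache=IngestionCache(),  # defaults to a local in-process cache
)

docs = [Document(text="hello world")]

nodes = pipeline.run(documents=docs)  # transformations are computed and cached
nodes = pipeline.run(documents=docs)  # identical input reuses cached results

# The local cache can be saved and restored between sessions.
pipeline.cache.persist("./llama_cache.json")
restored = IngestionCache.from_persist_path("./llama_cache.json")
```

For remote caching, the same docs page shows swapping the default store for a Redis- or Mongo-backed KV store passed to IngestionCache; as noted above, Azure Cosmos DB may not be among the available backends.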