
gyx119
Hi there! Is there an equivalent of LangChain's LLM cache for LlamaIndex? (Specifically, I am interested in using Azure Cosmos DB as the cache storage.) https://python.langchain.com/docs/modules/model_io/llms/llm_caching
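
There is no widely documented one-line equivalent of LangChain's LLM cache in llama_index, so one workaround is to wrap the LLM yourself. Below is a minimal sketch, assuming calls go through llama_index's `llm.complete(prompt)` interface; the in-memory dict is a stand-in for whatever backing store you pick, and wiring it to an Azure Cosmos DB container would be your own (hypothetical) integration, not a built-in one:

```python
import hashlib

class CachedLLM:
    """Sketch of a prompt-level completion cache around any llama_index LLM."""

    def __init__(self, llm, store=None):
        self.llm = llm                 # any llama_index LLM exposing .complete()
        self.store = store if store is not None else {}  # swap for a Cosmos DB-backed mapping

    def _key(self, prompt: str) -> str:
        # Hash the prompt so cache keys stay a fixed, index-friendly length.
        return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

    def complete(self, prompt: str):
        key = self._key(prompt)
        if key in self.store:          # cache hit: skip the LLM call entirely
            return self.store[key]
        response = self.llm.complete(prompt)
        self.store[key] = response     # cache miss: store the response for next time
        return response

# Usage (assumes llama-index-llms-openai is installed; model name is illustrative):
# from llama_index.llms.openai import OpenAI
# llm = CachedLLM(OpenAI(model="gpt-4o-mini"))
# llm.complete("What is a vector index?")  # a second identical call is served from the cache
```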
Hi, I've been experimenting with query pipelines and have found them super useful. I am converting my query pipelines into tools and feeding them into my ReAct agent.

I understand the output of a query pipeline has to be an AgentResponse and will go into the prompt. If I have one query pipeline for getting data (e.g. a big pandas DataFrame with 100k rows) and another query pipeline with a code interpreter for analyzing the data, how do I pass the DataFrame from one query pipeline to the other for analysis?

The reason I did not combine these two query pipelines is that I want the ReAct agent to reuse the code interpreter for other things.
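
One common pattern for this is to pass a handle to the DataFrame between tools instead of the DataFrame itself, so the 100k rows never enter the prompt. A minimal sketch, assuming `FunctionTool` from llama_index; the `_DATAFRAMES` registry and both tool bodies are hypothetical stand-ins for the two query pipelines:

```python
import pandas as pd
from llama_index.core.tools import FunctionTool

# Shared in-process registry: the fetch tool writes DataFrames here under a
# short key, and the analysis tool reads them back by that key.
_DATAFRAMES: dict[str, pd.DataFrame] = {}

def fetch_data(query: str) -> str:
    """Fetch data for `query` and return a handle to the resulting DataFrame."""
    df = pd.DataFrame({"value": range(100_000)})  # stand-in for the real data pipeline
    handle = f"df_{len(_DATAFRAMES)}"
    _DATAFRAMES[handle] = df
    # Only this short string (not the 100k rows) goes back into the agent's prompt.
    return f"Stored result for '{query}' as handle '{handle}' ({len(df)} rows)."

def analyze_data(handle: str, question: str) -> str:
    """Answer `question` about the DataFrame stored under `handle`."""
    df = _DATAFRAMES[handle]
    # Stand-in for the code-interpreter pipeline; a real tool would run
    # generated pandas code against `df` here.
    return f"{handle}: mean of 'value' is {df['value'].mean():.2f}"

fetch_tool = FunctionTool.from_defaults(fn=fetch_data)
analyze_tool = FunctionTool.from_defaults(fn=analyze_data)
```

Both tools can then be handed to the ReAct agent (e.g. `ReActAgent.from_tools([fetch_tool, analyze_tool], llm=llm)`), which only ever sees the short handle strings, and the code-interpreter tool stays reusable for other data the agent fetches later.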