Functions that I would like to wrap into FunctionTools

Functions that I would like to wrap into FunctionTools tend to exceed the context window of my locally-running LLMs. (I'm looking at you, Wikipedia tool.)

While I can wrap each tool individually with a "please summarize this" decorator function, I wonder whether LlamaIndex already has a mechanism for bulk-wrapping all tools for a given (ReAct)Agent?
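Something like this per-tool wrapper is what I mean (a rough sketch; `summarized` and the `summarize` callback are my own names, not LlamaIndex APIs):

```python
from llama_index.core.tools import FunctionTool

# Hypothetical wrapper: run the tool, then condense its raw output with
# an LLM call (`summarize`) before the agent ever sees it. Note that the
# original argument schema is not preserved in this sketch.
def summarized(tool: FunctionTool, summarize) -> FunctionTool:
    def wrapped(*args, **kwargs):
        raw = tool(*args, **kwargs)  # returns a ToolOutput
        return summarize(str(raw))

    return FunctionTool.from_defaults(
        fn=wrapped,
        name=tool.metadata.name,
        description=tool.metadata.description,
    )

# Bulk-wrapping is then just a list comprehension:
# tools = [summarized(t, summarize) for t in tools]
```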
nothing built in πŸ‘€ I'm curious why the overflow? Too many tools? Tools have descriptions that are too long?
I meant the responses of the tools, e.g., in https://llamahub.ai/l/tools-wikipedia:
load_data: Loads a page from wikipedia
This is a heavy offender.
ah right! For that, there is an OnDemandLoaderTool that creates a vector index on the fly and searches it
But it gets the point across
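That pattern looks roughly like this (based on the documented usage; exact import paths vary across LlamaIndex versions):

```python
from llama_index.core.tools.ondemand_loader_tool import OnDemandLoaderTool
from llama_index.readers.wikipedia import WikipediaReader

# On each call the tool loads the page, builds a vector index over it
# in memory, and answers the query from the top-k retrieved chunks,
# instead of dumping the whole article into the agent's context.
wiki_tool = OnDemandLoaderTool.from_defaults(
    WikipediaReader(),
    name="Wikipedia Tool",
    description="A tool for loading and querying articles from Wikipedia",
)
```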
That's smart! Thanks for the pointer! (LOL it even uses the Wikipedia loader already.)
My initial idea was to append a TreeSummarizer to the output of the Wikipedia tool. IIUC, TreeSummarizer will retain information from a broader share of the raw output than OnDemandLoaderTool, because TreeSummarizer aggregates information layer by layer across all chunks, while OnDemandLoaderTool keeps only the top-k retrieved chunks.
Just want to check if I'm making sense?
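For concreteness, here's the kind of thing I'm imagining with the TreeSummarize response synthesizer (the wiring around the Wikipedia output is my own sketch, not a built-in):

```python
from llama_index.core.response_synthesizers import TreeSummarize
from llama_index.readers.wikipedia import WikipediaReader

docs = WikipediaReader().load_data(pages=["Python (programming language)"])

# Summarize ALL chunks hierarchically (layer by layer) rather than
# retrieving only the top-k, so no part of the raw output is dropped
# outright.
summarizer = TreeSummarize()
summary = summarizer.get_response(
    "What are the key points of this page?",
    [doc.text for doc in docs],
)
print(summary)
```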
Too many tools?
On a separate topic, I'm intrigued by this issue you brought up. What would be LlamaIndex's solution to the "so many tools that the system prompt takes up >80% of the context window" problem? My guess is a router?
yea we have a concept of tool retrieval. So retrieve the top-k tools to answer each query. Not perfect but certainly helps πŸ™‚
Yea that makes sense. Tree summarize will just be slower to run vs. top-k search πŸ‘
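For anyone finding this later, the tool-retrieval pattern mentioned above looks roughly like this (a sketch built on ObjectIndex; imports vary by LlamaIndex version):

```python
from llama_index.core import VectorStoreIndex
from llama_index.core.agent import ReActAgent
from llama_index.core.objects import ObjectIndex
from llama_index.core.tools import FunctionTool

def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

tools = [FunctionTool.from_defaults(fn=add)]  # imagine dozens of tools here

# Index the tool descriptions themselves; at query time only the top-k
# most relevant tools are placed in the agent's prompt, instead of all
# of them on every turn.
obj_index = ObjectIndex.from_objects(tools, index_cls=VectorStoreIndex)
agent = ReActAgent.from_tools(
    tool_retriever=obj_index.as_retriever(similarity_top_k=3),
    verbose=True,
)
```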