
Iterative

Hey Everyone,
Hope someone can help me out here -
I have a long document (longer than 8K tokens) that is already split into chunks. I would like to ask complex questions about this document, which requires an iterative process using an agent.
Would love to hear any recommendations on how to approach this, and I'd highly appreciate any code snippets, as I couldn't find anything relevant.
TIA!!
What do you mean by iterative process?
It means that I would like to answer a complex question which needs to be broken down into several sub-tasks.
For example, if I asked you who is the tallest American president of all time, you would first need to find a list of the American presidents, then find the height of each one, and then pick the tallest among them.
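The break-down described above can be sketched as a plain script with no LLM at all. This is only a toy illustration of the three sub-tasks (the president subset is hypothetical data I chose; the heights are the commonly cited figures), where each step stands in for one sub-query an agent would run:

```python
# Step 1: "find a list of the American presidents" (hard-coded subset here;
# an agent would query the index or the web for this).
presidents = ["Abraham Lincoln", "James Madison", "Lyndon B. Johnson"]

# Step 2: "for each one, find his height" (in cm; an agent would issue
# one lookup or sub-query per entity instead of this dict).
height_cm = {
    "Abraham Lincoln": 193,
    "James Madison": 163,
    "Lyndon B. Johnson": 192,
}

# Step 3: "among those, get the tallest one" — a simple reduce step.
tallest = max(presidents, key=lambda p: height_cm[p])
print(tallest)  # Abraham Lincoln
```

The point is that the final answer never appears in any single chunk; it only falls out of running the sub-tasks in sequence, which is why a single retrieval query isn't enough.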
Would be interesting to know how to solve this both for documents shorter than 8K tokens and for longer ones (I guess for the longer ones I could use GPTListIndex, but I'm open to suggestions)
I guess my use case is similar to what's demonstrated here -
https://python.langchain.com/en/latest/modules/agents/agents/examples/react.html
just not sure how to use LlamaIndex for that (given also that my long text file is already split into chunks)
LlamaIndex has query decomposition, but it seems like it's only supported for graphs 🤔
actually I lied
Interesting, I didn't know about this functionality. So this seems like an agent, except it cannot use any external tools such as web search, etc.
That would be a good start, but I'd prefer using an agent, since that gives me the ability to add more tools such as a calculator, web search, etc.
Thanks a lot for the reference! Would love to hear if there's a way to implement this using an agent just to have more flexibility in the future
@Logan M When reading the debug info in the link, it seems like it uses few-shot examples in each prompt, but if I understand correctly it only queries the LLM once, right? I was looking for something that queries the LLM several times until it reaches its goal, like an agent
It will query the LLM multiple times. Notice that it generates many questions and answers them all, then returns a final answer.

You can use this inside langchain, just by using llama index as a custom tool
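The multi-call flow described here can be sketched in plain Python. Everything below is hypothetical scaffolding, not the LlamaIndex implementation: `answer_with_decomposition` mimics the shape of the flow (one call to decompose, one call per sub-question, one call to synthesize), and `dummy_llm` just records how many times it was called:

```python
def answer_with_decomposition(question, llm):
    # Call 1: ask the LLM to break the question into sub-questions.
    raw = llm(f"Break this question into sub-questions: {question}")
    subqs = [q.strip() for q in raw.splitlines() if q.strip()]

    # One call per sub-question.
    answers = [(q, llm(q)) for q in subqs]

    # Final call: synthesize an answer from the sub-answers.
    context = "\n".join(f"Q: {q}\nA: {a}" for q, a in answers)
    return llm(f"Given:\n{context}\nAnswer the original question: {question}")

# Count the calls with a stub LLM.
calls = []
def dummy_llm(prompt):
    calls.append(prompt)
    if prompt.startswith("Break"):
        return "Who were the US presidents?\nHow tall was each one?"
    return "ok"

answer_with_decomposition("Who is the tallest US president?", dummy_llm)
print(len(calls))  # 4: decompose + 2 sub-answers + final synthesis
```

Running it shows four LLM calls for two sub-questions, which matches the "generates many questions, answers them all, then returns a final answer" description above.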
You're right, missed that 🙂
By "custom tool" you mean I can use GPTListIndex as a tool?
Yea exactly!
Check out this example. The possibilities are endless ✨ Rather than a lambda, you can use a function to make it more customizable


https://github.com/jerryjliu/llama_index/blob/main/examples/langchain_demo/LangchainDemo.ipynb
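A rough sketch of the "index as a tool" pattern from that notebook. Note the stand-ins: `FakeIndex` is a hypothetical no-LLM substitute for a real GPTListIndex so the snippet runs standalone, and the `Tool(...)` registration is only shown as a comment since it depends on the LangChain version you have installed:

```python
import re

class FakeIndex:
    """Stand-in for a GPTListIndex built from pre-split chunks (no LLM)."""
    def __init__(self, chunks):
        self.chunks = chunks

    def query(self, question):
        # A real index would synthesize an answer with an LLM; here we
        # just return the chunks that share a word with the question.
        words = re.findall(r"\w+", question.lower())
        hits = [c for c in self.chunks if any(w in c.lower() for w in words)]
        return " ".join(hits) or "No answer found."

def make_index_tool(index):
    # With langchain + llama_index installed, this is roughly what you
    # would register as a custom tool:
    #   Tool(name="Document Index",
    #        func=query_index,
    #        description="Answers questions about the document.")
    # Using a named function instead of a lambda leaves room to add
    # logging, caching, or answer post-processing later.
    def query_index(question: str) -> str:
        return str(index.query(question))
    return query_index

index = FakeIndex(["Lincoln was 6 ft 4 in tall.", "Madison was 5 ft 4 in tall."])
query_index = make_index_tool(index)
print(query_index("How tall was Lincoln?"))
```

The agent then decides when to call the tool, so the iterative sub-task loop comes for free from the agent framework rather than from the index itself.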
Very cool, this seems like the thing I was looking for!!!
Thanks 🙂
@Logan M Hey Logan, I actually have a follow-up question regarding this.
Is there an AutoGPT-like capability where you can provide it with several goals, i.e. letting it read a text file with instructions, then apply those instructions to the given index (e.g. a ListIndex) to find insights, etc.?
Not exactly yet! But that does sound pretty cool (and also not that hard to add to an auto-gpt-like system). I've actually been working on llama-agi too! https://github.com/run-llama/llama-lab/tree/main/llama_agi
Very cool @Logan M! So what's left to develop to support the example I've described? Is it just about supporting reading/writing files?
Yea, either reading/writing files (which will hopefully be migrated from auto_llama soon, another folder in that repo), or adding an existing index as a tool 👍

But I think the abstractions so far in the package make those really easy to add yourself too 👀
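A minimal sketch of the instructions-file idea discussed above, assuming only that each non-empty line of the file is one instruction and that `run_query` is whatever callable ends up wrapping the index (the file name and the lambda below are illustrative stand-ins, not part of any library):

```python
import tempfile
from pathlib import Path

def run_instructions(path, run_query):
    """Read one instruction per line and run each against the query tool."""
    results = {}
    for line in Path(path).read_text().splitlines():
        instruction = line.strip()
        if not instruction:
            continue  # skip blank lines
        results[instruction] = run_query(instruction)
    return results

# Demo with a dummy query function standing in for an index query.
with tempfile.TemporaryDirectory() as d:
    p = Path(d) / "instructions.txt"
    p.write_text("Summarize the document\n\nList key dates\n")
    out = run_instructions(p, lambda q: f"answer to: {q}")
print(out)
```

With an index-backed tool plugged in as `run_query`, this is essentially the "goals file" loop, minus the replanning an AutoGPT-style agent would add on top.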
Amazing! Keep up the good work!