
Updated 2 years ago

agents task indexes

Hi, I need to read the code for the llama agents, but conceptually, why is there a need to store tasks in an index?

I was reviewing the original BabyAGI source code and the LangChain implementation. I am trying to understand how and why the agent context works.
Both implementations embed the task execution results, but the original also stores the result itself in the metadata (though it never uses it).
The query for the execution agent's context is the objective.

Does anyone know the rationale behind using only the results to find similar tasks as context for the current task?
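For reference, the retrieval pattern being described can be sketched roughly like this. This is a toy sketch, not BabyAGI's actual code: the function names are mine, and a bag-of-words counter stands in for a real embedding model. The point is only the asymmetry being asked about: the *result* is what gets embedded, the task text rides along as metadata, and the *objective* is the query.

```python
# Toy sketch of the BabyAGI context-retrieval pattern (not its actual code):
# the *result* of each completed task is embedded, the task text is kept as
# metadata, and retrieval queries the store with the *objective*.
from collections import Counter
from math import sqrt

def embed(text):
    """Stand-in embedding: bag-of-words counts (real code uses an embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

memory = []  # list of (result_vector, metadata) pairs

def add_completed_task(task, result):
    # Embed the execution result; keep the task itself only as metadata.
    memory.append((embed(result), {"task": task, "result": result}))

def get_context(objective, top_k=2):
    # Query with the objective; return the *task* names of the best matches.
    scored = sorted(memory, key=lambda m: cosine(embed(objective), m[0]), reverse=True)
    return [meta["task"] for _, meta in scored[:top_k]]

add_completed_task("Search for AI news", "Found three articles about AI agents")
add_completed_task("Make a todo list", "1. buy milk 2. walk dog")
print(get_context("Write a report about AI agents", top_k=1))
# -> ['Search for AI news']
```

Read this way, the design does look like "which past work produced output relevant to what I'm trying to achieve now", which is the question raised below.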
18 comments
Also, if tasks are well defined, you don't usually need context from other tasks with similar results; you just need to perform the task.
@Logan M I noticed that in add_completed_task it always recreates a new index, which looks inefficient?
The same happens in get_next_task and add_new_tasks.
Since it's only a list index, it's free to create them on the fly πŸ‘Œ
In auto_llama, if it needs more than 2000 chars for memory it just uses the first memories instead of the latest ones. I guess it's just a first implementation, but it's not using llama-index for memory, just a simple list. So auto_llama currently doesn't use llama indexes at all?
I actually haven't looked at the auto_llama code much hahaha, I was busy writing the llama_agi folder

Yea, it's just a first implementation. But rather than taking the first 2000 chars, llama_agi uses the list index to write a summary of completed tasks
ah I thought you coded it πŸ™‚
Yes, llama_agi is more complete, but the prompting in auto_llama is more advanced. I think combining the two could be a good test.
Anyway, the most important thing for me is to understand my first question above about the way BabyAGI embeds the objectives to get tasks for context.
I guess it's trying to find which tasks it did in the past that tried to achieve the same objective. They call it context, but it's more about avoiding doing things again. I'm not sure about the rationale, though.
llama_agi uses task+objective as the output, but BabyAGI uses only the task as the output, and only the objective as the embedding query.
Anyway, I'm planning to develop my own agent. I'm just learning how best to store the memories, but these examples are pretty basic in that regard.
LangChain introduced the time-weighted vector store, from the generative agents paper. Do you have something similar in llama-index?

https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/time_weighted_vectorstore.html
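For context, the retriever linked above combines semantic similarity with a recency bonus; the scoring rule it documents is `similarity + (1 - decay_rate) ** hours_since_last_access`. A minimal sketch of just that scoring function (similarity values here are made up for illustration):

```python
# Sketch of time-weighted retrieval scoring as in LangChain's
# TimeWeightedVectorStoreRetriever (idea from the generative agents paper):
#   score = similarity + (1 - decay_rate) ** hours_since_last_access
# so a recently accessed memory outranks an equally similar stale one.

def time_weighted_score(similarity, hours_since_access, decay_rate=0.01):
    recency = (1.0 - decay_rate) ** hours_since_access
    return similarity + recency

# Two memories with identical similarity: the fresher one wins.
stale = time_weighted_score(similarity=0.8, hours_since_access=240)  # ~10 days old
fresh = time_weighted_score(similarity=0.8, hours_since_access=1)
print(fresh > stale)  # True

# With decay_rate close to 1, the recency term vanishes almost immediately
# and ranking falls back to similarity alone.
a = time_weighted_score(0.9, 240, decay_rate=0.999)
b = time_weighted_score(0.5, 1, decay_rate=0.999)
print(a > b)  # True: the more similar memory wins despite being older
```

A low decay_rate makes memories stay "fresh" for a long time; a high one makes the retriever behave like a plain similarity search.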
I wish you guys got together to fully integrate the two projects to avoid duplicating efforts.
πŸ€·β€β™‚οΈ I see it as two different objectives, langchain is trying to do chat+everything, llama index is just really focused on being good at indexing data

We don't have that vector store yet. Would be a very cool contribution though! 😎

I think there is a lot of potential for using llama index to store memories for agents, especially if you organize the memory into a sort of graph structure using our graph/composable indexes πŸ’ͺ
I think the main rationale is that tasks are like trees. From one task, many more grow under it, and so on
But it's not a tree, it's just a vector store: it finds tasks with similar objectives and then retrieves them for context.
I know, I meant just in terms of the nature of it haha. But that's why the previous task is used to generate new ones (I think)