also if tasks are well defined, you don't usually need context from other tasks with similar results. you just need to perform the task.
@Logan M I noticed that in add_completed_task it always recreates a new index, looks inefficient?
also in get_next_task and add_new_tasks
Since it's only a list index, it's free to create them on the fly 🙂
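(For reference, a rough sketch of the rebuild-on-every-call pattern being discussed; hypothetical names, not the actual llama_agi code, assuming a recent llama_index where SummaryIndex is the list index:)
```python
# Hypothetical sketch: a list index just holds nodes in a plain list,
# so rebuilding it from the completed-task list each call is cheap (no embeddings).
from llama_index.core import SummaryIndex, Document

completed_tasks: list[str] = []

def add_completed_task(task: str, result: str) -> SummaryIndex:
    """Append the finished task and rebuild the list index from scratch."""
    completed_tasks.append(f"Task: {task}\nResult: {result}")
    docs = [Document(text=t) for t in completed_tasks]
    return SummaryIndex.from_documents(docs)  # O(n) over plain text
```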
in auto_llama, if it needs more than 2000 chars for memory it just uses the first memories instead of the latest ones. I guess it's just a first implementation, but it's not using llama-index for memory, just a simple list. so actually auto_llama currently doesn't use LlamaIndex indexes at all?
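(Sketch of the truncation behavior described above, with hypothetical names rather than auto_llama's actual code, plus the "keep the latest" alternative:)
```python
MAX_MEMORY_CHARS = 2000  # assumed limit from the discussion above

def build_memory_prompt(memories: list[str]) -> str:
    # Behavior as described: join everything, then keep the FIRST 2000 chars,
    # so the oldest memories win and the latest ones get dropped.
    text = "\n".join(memories)
    return text[:MAX_MEMORY_CHARS]

def build_memory_prompt_latest(memories: list[str]) -> str:
    # Alternative: keep the LAST 2000 chars so the most recent memories survive.
    text = "\n".join(memories)
    return text[-MAX_MEMORY_CHARS:]
```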
I actually haven't looked at the auto_llama code much hahaha I was busy writing the llama_agi folder
Yea it's just a first implementation. But rather than taking the first 2000 chars, llama_agi uses the list index to write a summary of tasks completed
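(A hedged sketch of what summarizing completed tasks over a list index could look like; API shown for a recent llama_index, llama_agi itself may use older names like GPTListIndex:)
```python
from llama_index.core import SummaryIndex, Document

def summarize_completed_tasks(completed: list[str]) -> str:
    docs = [Document(text=t) for t in completed]
    index = SummaryIndex.from_documents(docs)
    # tree_summarize folds all nodes into one summary instead of truncating
    query_engine = index.as_query_engine(response_mode="tree_summarize")
    response = query_engine.query(
        "Summarize the tasks completed so far and their results."
    )
    return str(response)
```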
ah I thought you coded it 🙂
yes llama_agi is more complete, but the prompting in auto_llama is more advanced. I think combining the 2 could be a good test
anyway the most important thing for me is to understand my first questions above about the way baby agi embeds the objectives to get tasks for context
I guess it's trying to find which tasks it did in the past that tried to achieve the same objective. they call it context but it's more to avoid doing things again. but not sure about the rationale
llama_agi uses task+objective as the output, but baby agi uses only the task as output, and embeds only the objective
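(Hypothetical sketch of the retrieval pattern being puzzled over here, not BabyAGI's actual code: results stored keyed by an embedding of the objective, so querying with that same objective pulls back previously completed, related tasks as "context". The embed function is a stand-in, not a real embedding call.)
```python
import numpy as np

memory: list[tuple[np.ndarray, str]] = []  # (objective_embedding, task_result)

def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding request (e.g. an OpenAI embeddings call)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(16)
    return v / np.linalg.norm(v)

def store_result(objective: str, task: str, result: str) -> None:
    # Embed the OBJECTIVE, but store the task result as the payload.
    memory.append((embed(objective), f"{task}: {result}"))

def get_context(objective: str, top_k: int = 5) -> list[str]:
    # Nearest-neighbor search over stored objective embeddings (cosine on unit vectors).
    query = embed(objective)
    scored = sorted(memory, key=lambda m: float(query @ m[0]), reverse=True)
    return [payload for _, payload in scored[:top_k]]
```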
anyway I'm planning to develop my own agent, I'm just learning how to best store the memories, but these examples are pretty basic in that regard
I wish you guys got together to fully integrate the 2 projects to avoid duplicating efforts
🤷‍♂️ I see it as two different objectives, langchain is trying to do chat+everything, llama index is just really focused on being good at indexing data
We don't have that vector store yet. Would be a very cool contribution though! 🙂
I think there is a lot of potential for using llama index to store memories for agents, especially if you organize the memory into a sort of graph structure using our graph/composable indexes 💪
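(A very rough sketch of the graph-structured memory idea, assuming llama_index's composable indices; import paths and class names vary across versions, so treat this as illustrative only:)
```python
from llama_index.core import Document, SummaryIndex, TreeIndex, ComposableGraph

# Hypothetical split of agent memory into two child indices.
short_term = SummaryIndex.from_documents(
    [Document(text="Task 12: scraped the pricing page")]
)
long_term = SummaryIndex.from_documents(
    [Document(text="Overall objective: build a price tracker")]
)

# Compose them under a root tree index that routes queries to the right child.
graph = ComposableGraph.from_indices(
    TreeIndex,
    [short_term, long_term],
    index_summaries=["recent task results", "long-term goals and facts"],
)

response = graph.as_query_engine().query(
    "What has been done toward the objective so far?"
)
print(response)
```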
I think the main rationale is that tasks are like trees. From one task, many more grow under it, and so on
but it's not a tree, it's just a vector store, and it's finding tasks with similar objectives and then retrieving them for context
I know, I meant just in terms of the nature of it haha. But that's why the previous task is used to generate new ones (I think)