The community member is interested in the new LlamaIndex Workflow abstraction, which requires familiarity with LlamaIndex's lower-level abstractions. They wonder if they would need to write nested workflows when using multiple RAG tools, and whether the LlamaIndex team has a specific use case in mind for the Workflow abstraction.
Other community members respond that the Workflow abstraction allows using any level of abstraction, and that existing agent classes can be used within a Workflow. The main use case is to provide a structured way to chain logic together, which the LlamaIndex team plans to integrate into their llama-agents repository. Some community members express dislike for the LangGraph abstraction, which the Workflow abstraction aims to improve upon.
Has anyone tried the new LlamaIndex Workflow abstraction?
I find it quite interesting because using the Workflow abstraction requires developers to be quite familiar with LlamaIndex's lower-level abstractions (e.g. llm.get_tool_calls_from_response(), manually loading chat conversations into memory, etc.). Most of us are probably aware of the RAG abstractions but not the agent ones, because it's usually just agent.chat(query). The ReAct agent example underscores the biggest difference: ReActAgent.from_tools() is just one line versus writing an entire ReAct agent workflow. If I were to write an agent using multiple RAG tools, would I have to write nested workflows?
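For context, this is roughly the high-level pattern I mean (just a sketch; the two query engines and the model name are placeholders for whatever you already have set up):

```python
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import QueryEngineTool
from llama_index.llms.openai import OpenAI

# Placeholder query engines -- assume these were already built over existing indexes.
sales_tool = QueryEngineTool.from_defaults(
    query_engine=sales_query_engine,
    name="sales_docs",
    description="Answers questions about the sales documents.",
)
hr_tool = QueryEngineTool.from_defaults(
    query_engine=hr_query_engine,
    name="hr_docs",
    description="Answers questions about the HR policy documents.",
)

# The familiar one-liner: no manual tool-call parsing, no manual memory handling.
agent = ReActAgent.from_tools([sales_tool, hr_tool], llm=OpenAI(model="gpt-4o-mini"))
print(agent.chat("Which sales regions are affected by the new leave policy?"))
```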
I haven't had the need to build agents from low-level abstractions yet. Did the LlamaIndex team have a particular use case in mind when designing the Workflow abstraction?
You can use any level of abstraction you want inside of it
The examples just implement lower-level logic to make it more interesting (since people view these modules as black boxes most of the time, and have no idea what they do under the hood or how to customize them).
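To make that concrete, here's a rough sketch (not an official recipe) of wrapping a prebuilt agent in a single workflow step, so multiple RAG tools don't require nested workflows. The tool list and query are assumed to come from wherever you already build your query engines:

```python
from llama_index.core.agent import ReActAgent
from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step


class RAGAgentWorkflow(Workflow):
    """One step that runs an off-the-shelf ReAct agent over several RAG tools."""

    def __init__(self, tools, **kwargs):
        super().__init__(**kwargs)
        # Reuse the high-level agent class instead of re-implementing the ReAct loop.
        self.agent = ReActAgent.from_tools(tools)

    @step
    async def run_agent(self, ev: StartEvent) -> StopEvent:
        # The query is whatever keyword argument was passed to workflow.run().
        response = await self.agent.achat(ev.query)
        return StopEvent(result=str(response))


# Usage, inside an async context (tools = a list of QueryEngineTools you already have):
# workflow = RAGAgentWorkflow(tools, timeout=120)
# answer = await workflow.run(query="Which sales regions are affected by the new leave policy?")
```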