Developing a multi-agent query pipeline using the LlamaIndex workflows approach
At a glance
The community members are discussing the use of the LlamaIndex workflow approach to develop a multi-agent query pipeline. One community member has created a demo and shared the materials, and the discussion focuses on the design choices, such as the orchestrator agent's role and the flexibility of customizing the agents. The community members also explore the possibility of having agents that rely solely on the language model without any tools.
Great demo! I have a quick question. In this example, when the orchestrator agent designates a speaking agent, if that speaking agent cannot fully resolve the issue, does it directly pass the task to another agent, or does it first return to the orchestrator agent?
It's designed so that a speaking agent can only request a transfer, which hands control back to the orchestrator.
The thinking behind this design is that agent performance suffers when there are too many tools/options. If every agent had every other agent as a tool, they probably wouldn't function very well (or at least, the system would not scale to many agents).
The orchestrator is the only agent with details on every other agent
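The control flow described above can be sketched in plain Python. This is a hypothetical, minimal sketch (the names `AgentResult`, `orchestrate`, and `pick` are not from the demo): each agent either resolves the task or requests a transfer, and only the orchestrator, which holds descriptions of every agent, decides who speaks next.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentResult:
    done: bool
    output: str = ""
    transfer_reason: str = ""  # set when the agent requests a transfer

def orchestrate(task: str,
                agents: dict[str, Callable[[str], AgentResult]],
                pick: Callable[[str, str], str],
                max_hops: int = 5) -> str:
    """Route a task until some agent resolves it or the hop limit is hit."""
    current = pick(task, "")  # orchestrator picks the first speaker
    for _ in range(max_hops):
        result = agents[current](task)
        if result.done:
            return result.output
        # Agents cannot hand off directly to each other; control returns to
        # the orchestrator, the only component that knows the full roster.
        current = pick(task, result.transfer_reason)
    return "unresolved"
```

With two toy agents, a billing agent that can't handle refunds would request a transfer, the orchestrator would re-route, and the refunds agent would finish the task.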
Understood. I think this concept is similar to the multi-agent framework AutoGen, which also has an orchestrator mechanism called a manager. I'd like to ask further: why didn't you use LlamaIndex's OpenAIAgent or ReActAgent? Was it to reduce abstraction and increase customization flexibility?
I’d like to ask further: In this example, it seems that each agent needs to have at least one tool, right? Is it possible to modify it so that some agents don’t need tools and rely solely on the LLM?
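A tool-less agent of the kind asked about here can be sketched as a thin wrapper that forwards the conversation straight to the LLM with its system prompt, skipping any tool-selection step. This is a hypothetical illustration, not the demo's code; `make_toolless_agent` and the `llm` callable are assumed names.

```python
from typing import Callable

def make_toolless_agent(llm: Callable[[str], str], system_prompt: str) -> Callable[[str], str]:
    """Build an agent with no tools: the LLM's text reply is the final answer."""
    def agent(user_message: str) -> str:
        # No tool schemas are passed to the model; we simply prepend the
        # agent's system prompt and return the raw completion.
        return llm(f"{system_prompt}\n\nUser: {user_message}")
    return agent
```

In practice `llm` would be a call to the chat model; here any `str -> str` callable works, which also makes the pattern easy to test with a stub.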