
Updated 3 months ago

Multi-agent guidelines with LlamaIndex for parallel agents with different LLMs and memory

At a glance

The community members are discussing the use of multi-agent guidelines with LlamaIndex, specifically regarding the ability to use agents in parallel with different language models (LLMs) and maintain memory in the loop, similar to CrewAI. A community member shared a video example that demonstrates the use of workflows in LlamaIndex, which they believe provides a nice interface and shows how to make customizations. However, another community member notes that the example runs agents sequentially, though they mention that tool calls can be run concurrently.

The discussion continues, with a community member expressing interest in seeing more examples of using various tools, such as custom functions, through the CrewAI and LlamaIndex integration, beyond what is covered in the documentation. Another community member suggests that understanding how workflows work in LlamaIndex can provide a lot of customization options compared to other high-level frameworks like CrewAI, though they admit to not having much experience with CrewAI themselves.

The community members then discuss some specific issues they are facing, such as the Gemini class not implementing the FunctionCallingLLM interface, and how to use different LLMs for each agent in the workflow. A community member provides a solution for the latter, suggesting that the LLM can be pulled from the selected agent configuration within the workflow.

Useful resources
Hi guys,
Are there any multi-agent guidelines with LlamaIndex for running agents in parallel with different LLMs and keeping memory in the loop?
Something like CrewAI?
I read the CrewAI example with LlamaIndex, but it is very simple and skips over many things.
11 comments
Maybe this example would be interesting, I made a video on this

With workflows, you can kind of do whatever you want, but I feel like this exposes a nice interface, and also shows you how it works so that you can make tweaks

https://www.youtube.com/watch?v=wuuO04j4jPc
Materials from the video: https://github.com/run-llama/multi-agent-concierge/tree/main/video_tutorial_materials
I guess this runs agents sequentially though
But you can run tool calls concurrently easily enough
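The concurrency mentioned here can be sketched with plain `asyncio` — the tool functions below are hypothetical stand-ins, not LlamaIndex APIs, but the pattern (gathering several awaitable tool calls instead of awaiting them one by one) is the same:

```python
import asyncio

# Hypothetical async tools -- stand-ins for real tool implementations.
async def search_docs(query: str) -> str:
    await asyncio.sleep(0.01)  # simulate I/O latency
    return f"docs result for {query!r}"

async def run_calculator(expr: str) -> str:
    await asyncio.sleep(0.01)
    return f"calc result for {expr!r}"

async def dispatch_tool_calls() -> list:
    # Run both tool calls concurrently instead of one after another.
    return await asyncio.gather(
        search_docs("workflows"),
        run_calculator("2 + 2"),
    )

results = asyncio.run(dispatch_tool_calls())
print(results)
```

Because the calls overlap, total latency is roughly the slowest call rather than the sum of all calls.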
Thanks Logan!
I'll have a look at it.
Hello Logan,

I reviewed the example. While intriguing, it seems more complex than other LlamaIndex applications (at least for me!). Could you provide additional examples of using various tools, such as custom functions, through the CrewAI and LlamaIndex integration? I'm particularly interested in examples beyond those in the documentation.
I think it's worth understanding how it works -- workflows are new, so I don't blame anyone for not knowing how they work (maybe I should have assumed you hadn't used them before)
https://docs.llamaindex.ai/en/stable/module_guides/workflow/#workflows

I think it's worth figuring out -- it will give you a ton of customization options compared to any other high level framework like crewai
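As a rough mental model of what the docs describe (a toy stand-in in plain Python, not LlamaIndex's actual Workflow API): steps are handlers that receive one event type and emit the next event, and the run loop dispatches events until a stop event is produced.

```python
from dataclasses import dataclass

# Toy events -- these mimic the spirit of workflow events, not real classes.
@dataclass
class StartEvent:
    query: str

@dataclass
class DraftEvent:
    text: str

@dataclass
class StopEvent:
    result: str

def draft_step(ev: StartEvent) -> DraftEvent:
    # First step: turn the incoming query into a draft.
    return DraftEvent(text=f"draft answer to {ev.query!r}")

def polish_step(ev: DraftEvent) -> StopEvent:
    # Second step: polish the draft and end the run.
    return StopEvent(result=ev.text.upper())

# The runner dispatches each event to the step registered for its type.
STEPS = {StartEvent: draft_step, DraftEvent: polish_step}

def run_workflow(start: StartEvent) -> str:
    ev = start
    while not isinstance(ev, StopEvent):
        ev = STEPS[type(ev)](ev)
    return ev.result

answer = run_workflow(StartEvent(query="hello"))
print(answer)
```

The customization point is exactly this dispatch: swapping a step, or emitting a different event type, reroutes the whole flow.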

I don't have any examples of CrewAI and LlamaIndex, actually 😅 I've never used it much myself. I found it good for demos but not much else
Thanks for the info and the link! I'm more curious now to check out those workflows again. It's cool if they allow me more customizations! 🤗
You're right! It's super cool after taking a closer look. Thanks for making something so awesome!
But, I'm still a bit stuck. Two things:
1) It works with OpenAI and Anthropic, but Gemini throws a "The llm should be functioncalling llm" error. What's the deal?
2) How can I use different LLMs for each agent?
The Gemini class does not implement FunctionCallingLLM, so it won't work (the Vertex class does though)
For different LLMs per agent, you could add that to the agent config. Then in the workflow, pull the LLM from the selected config
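One way to sketch that suggestion (the config fields, agent names, and `FakeLLM` stub below are all hypothetical, not LlamaIndex classes): store an LLM on each agent's config, and have the workflow step look up the LLM from whichever agent was selected.

```python
from dataclasses import dataclass

# Stand-in for an LLM client; in practice this would be an
# OpenAI / Anthropic / Vertex instance configured per agent.
@dataclass
class FakeLLM:
    model: str
    def chat(self, prompt: str) -> str:
        return f"[{self.model}] reply to {prompt!r}"

@dataclass
class AgentConfig:
    name: str
    system_prompt: str
    llm: FakeLLM  # each agent carries its own LLM

AGENT_CONFIGS = {
    "researcher": AgentConfig("researcher", "You research.", FakeLLM("model-a")),
    "writer": AgentConfig("writer", "You write.", FakeLLM("model-b")),
}

def run_agent_step(agent_name: str, prompt: str) -> str:
    # Inside the workflow step: pull the LLM from the selected agent's config
    # rather than using one global LLM.
    config = AGENT_CONFIGS[agent_name]
    return config.llm.chat(prompt)

print(run_agent_step("writer", "summarize"))
```

Keeping the LLM on the config means routing between agents automatically routes between models, with no extra plumbing in the workflow itself.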
Thanks for the response.