Multi-agent guidelines with LlamaIndex for parallel agents with different LLMs and memory

Hi guys,
Are there any multi-agent guidelines for LlamaIndex on running agents in parallel with different LLMs and keeping memory in the loop?
Something like CrewAI?
I read the CrewAI example with LlamaIndex, but it's very simple and glosses over a lot.
Maybe this example would be interesting, I made a video on this

With workflows, you can kind of do whatever you want, but I feel like this exposes a nice interface, and also shows you how it works so that you can make tweaks

https://www.youtube.com/watch?v=wuuO04j4jPc
Materials from the video: https://github.com/run-llama/multi-agent-concierge/tree/main/video_tutorial_materials
I guess this runs agents sequentially though
But you can run tool calls concurrently easily enough
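e.g. something like this (just a sketch, assuming your tools are async; the tool functions here are made up, but FunctionTool and acall are the real API):

```python
import asyncio

from llama_index.core.tools import FunctionTool

async def search_web(query: str) -> str:
    """Toy async tool standing in for a real web search."""
    return f"results for {query!r}"

async def query_db(sql: str) -> str:
    """Toy async tool standing in for a real database query."""
    return f"rows for {sql!r}"

search_tool = FunctionTool.from_defaults(async_fn=search_web)
db_tool = FunctionTool.from_defaults(async_fn=query_db)

async def main() -> None:
    # Fire both tool calls at once and wait for both results.
    outputs = await asyncio.gather(
        search_tool.acall(query="llamaindex workflows"),
        db_tool.acall(sql="SELECT * FROM docs"),
    )
    print([o.content for o in outputs])

asyncio.run(main())
```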
Thanks Logan!
I'll have a look at it.
Hello Logan,

I reviewed the example. While intriguing, it seems more complex than other LlamaIndex applications (at least for me!). Could you provide additional examples of using various tools, such as custom functions, through the CrewAI and LlamaIndex integration? I'm particularly interested in examples beyond those in the documentation.
I think it's worth understanding how it works -- workflows are new, so I don't blame anyone for not knowing how they work (maybe I should have assumed you hadn't used them before)
https://docs.llamaindex.ai/en/stable/module_guides/workflow/#workflows
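The hello-world from those docs is tiny, roughly this (minimal sketch adapted from the docs page; needs an async context to run):

```python
import asyncio

from llama_index.core.workflow import (
    StartEvent,
    StopEvent,
    Workflow,
    step,
)

class HelloWorkflow(Workflow):
    # A single step: take the StartEvent, return a StopEvent with the result.
    @step
    async def say_hello(self, ev: StartEvent) -> StopEvent:
        return StopEvent(result="Hello, workflows!")

async def main() -> None:
    workflow = HelloWorkflow(timeout=60, verbose=False)
    result = await workflow.run()
    print(result)  # "Hello, workflows!"

asyncio.run(main())
```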

I think it's worth figuring out -- it will give you a ton of customization options compared to any other high level framework like crewai

I don't have any examples of CrewAI and LlamaIndex, actually 😅 I've never used it much myself. I found it good for demos but not much else.
Thanks for the info and the link! I'm more curious now to check out those workflows again. It's cool if they allow me more customization! 🤗
you're right! It's super cool after taking a closer look. Thanks for making something so awesome!
But I'm still a bit stuck. Two things:
1) It works with OpenAI and Anthropic, but Gemini throws a "The llm should be functioncalling llm" error. What's the deal?
2) How can I use different LLMs for each agent?
The Gemini class does not implement FunctionCallingLLM, so it won't work (the Vertex class does though)
For different LLMs per agent, you could add that to the agent config. Then in the workflow, pull the LLM from the selected config.
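Something in this direction (just a sketch; the AgentConfig here is illustrative, not the exact class from the video repo, and the model names are placeholders -- note Vertex is the function-calling route to Gemini):

```python
from dataclasses import dataclass, field

from llama_index.core.llms import LLM
from llama_index.llms.anthropic import Anthropic
from llama_index.llms.openai import OpenAI
from llama_index.llms.vertex import Vertex

@dataclass
class AgentConfig:
    """Illustrative per-agent config; each agent carries its own LLM."""
    name: str
    system_prompt: str
    llm: LLM  # must be a FunctionCallingLLM for tool-calling agents
    tools: list = field(default_factory=list)

agent_configs = {
    "researcher": AgentConfig(
        name="researcher",
        system_prompt="You research topics.",
        llm=OpenAI(model="gpt-4o-mini"),
    ),
    "writer": AgentConfig(
        name="writer",
        system_prompt="You write summaries.",
        llm=Anthropic(model="claude-3-5-sonnet-latest"),
    ),
    "reviewer": AgentConfig(
        name="reviewer",
        system_prompt="You review drafts.",
        llm=Vertex(model="gemini-1.5-pro"),  # Gemini via Vertex does function calling
    ),
}

# Inside a workflow step, look up the active agent's config and use its LLM:
def get_llm(active_agent: str) -> LLM:
    return agent_configs[active_agent].llm
```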
Thanks for the response.