Hi guys, based on this doc (https://docs.llamaindex.ai/en/stable/examples/agent/agent_workflow_basic/), the multi-agent handoff is fairly explicit and sequential. My question is: does LlamaIndex offer a more implicit, parallelized approach for agents, or is this a design restriction? Any suggestions?
@Logan M thank you for your response. Doc-wise, if I want to add parallelization, do you have example code I can follow to understand how it works?
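To make the question concrete, here is roughly what I'm imagining; a minimal sketch, assuming the `FunctionAgent` import from the linked doc and an OpenAI LLM. Since `run()` on an agent is awaitable, I'd expect independent runs to be executable concurrently with plain `asyncio.gather` instead of sequential handoffs. The agents, prompts, and tool functions below are just placeholders I made up for illustration:

```python
import asyncio

from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4o-mini")


# Placeholder tools so each agent has something to call.
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())


def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b


research_agent = FunctionAgent(
    name="ResearchAgent",
    description="Answers short research questions.",
    system_prompt="Answer the user's question concisely.",
    llm=llm,
    tools=[word_count],
)
math_agent = FunctionAgent(
    name="MathAgent",
    description="Does arithmetic.",
    system_prompt="Use your tools to do math.",
    llm=llm,
    tools=[multiply],
)


async def main() -> None:
    # run() returns an awaitable handler, so independent runs can be awaited
    # concurrently instead of waiting on an explicit, sequential handoff.
    research_run = research_agent.run(user_msg="What is RAG in one sentence?")
    math_run = math_agent.run(user_msg="What is 1234 * 4567?")
    research_result, math_result = await asyncio.gather(research_run, math_run)
    print(research_result)
    print(math_result)


asyncio.run(main())
```

Is this the intended way to parallelize, or is there a built-in mechanism inside AgentWorkflow itself?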
Also, currently my agent can't act on or execute any tasks; it only works as a simple chat Q&A. If I want to do some automation with agents, like having them create calendar events for me and write emails automatically, how can I do that? Any suggestions on these two things?
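For the automation part, here's a minimal sketch of what I have in mind: wrapping my own calendar/email functions as tools and handing them to a `FunctionAgent`, so the agent can actually execute tasks instead of only chatting. The `create_calendar_event` and `send_email` functions below are hypothetical stand-ins; a real version would call the Google Calendar / Gmail (or similar) APIs inside them:

```python
import asyncio

from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.openai import OpenAI


def create_calendar_event(title: str, start_iso: str, end_iso: str) -> str:
    """Create a calendar event. Stand-in: a real version would call a calendar API."""
    # e.g. build and insert an event via the Google Calendar API here
    return f"Created event '{title}' from {start_iso} to {end_iso}"


def send_email(to: str, subject: str, body: str) -> str:
    """Send an email. Stand-in: a real version would use the Gmail API or SMTP."""
    # e.g. send the message via an email API here
    return f"Sent email to {to} with subject '{subject}'"


assistant = FunctionAgent(
    name="AutomationAssistant",
    description="Schedules events and sends emails on the user's behalf.",
    system_prompt=(
        "You are a personal assistant. Use the available tools to create "
        "calendar events and send emails when the user asks."
    ),
    llm=OpenAI(model="gpt-4o-mini"),
    tools=[create_calendar_event, send_email],
)


async def main() -> None:
    response = await assistant.run(
        user_msg=(
            "Schedule a 30-minute sync with Alex tomorrow at 10am and email "
            "alex@example.com the invite details."
        )
    )
    print(response)


asyncio.run(main())
```

Is this tool-based approach the recommended way to get real task execution, or is there a better pattern for this kind of automation?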