I am attempting to use the multiagent workflow but it flakes out and does not hand off to all the agents as it is supposed to

@kapa.ai I am attempting to use the multiagent workflow but it flakes out and does not hand off to all the agents as it is supposed to
Please stand by @cmosguy as I search the knowledge base. I normally take about 15 seconds or so. :wave:
----------------------
Based on the knowledge sources provided, I can offer some insights into the issue you're experiencing with the multi-agent workflow in LlamaIndex. It appears that there has been a recent fix related to this exact problem.

According to the LlamaIndex Changelog, in version 0.12.14 of llama-index-core, there was a fix implemented for "agentworkflow handoffs for non-openai llms" (issue #17631). This suggests that there was a known issue with agent handoffs in the multi-agent workflow, particularly when using LLMs other than OpenAI's.

If you're experiencing problems with agents not handing off control as expected, I recommend ensuring that you're using the latest version of LlamaIndex, particularly version 0.12.14 or later of llama-index-core.
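
If you're unsure which version you have, one quick way to check from Python (assuming the installed package exposes `__version__`, which recent releases do):

```python
import llama_index.core

# Handoff fixes for non-OpenAI LLMs landed in 0.12.14, so look for that or newer.
print(llama_index.core.__version__)
```

If this prints something older than 0.12.14, upgrade llama-index-core before debugging further.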

Additionally, when setting up your multi-agent workflow, make sure you've properly defined the can_handoff_to parameter for each agent. For example:

```python
from llama_index.core.agent.workflow import FunctionAgent

research_agent = FunctionAgent(
    name="ResearchAgent",
    description="Useful for searching the web for information on a given topic and recording notes on the topic.",
    system_prompt=(
        "You are the ResearchAgent that can search the web for information on a given topic and record notes on the topic. "
        "Once notes are recorded and you are satisfied, you should hand off control to the WriteAgent to write a report on the topic."
    ),
    llm=llm,
    tools=[search_web, record_notes],
    can_handoff_to=["WriteAgent"],
)
```


Make sure that the can_handoff_to lists for each agent are correctly set up to allow the desired handoffs between agents.
If you're still experiencing issues after updating and verifying your setup, you may want to check the LlamaIndex documentation for any additional troubleshooting steps or report the issue to the LlamaIndex GitHub repository.
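
For reference, here is a minimal sketch of how the agents above are typically wired into an AgentWorkflow (assuming write_agent and review_agent are defined with the same FunctionAgent pattern as research_agent):

```python
from llama_index.core.agent.workflow import AgentWorkflow

# Every agent named in a can_handoff_to list must also be passed here,
# and root_agent decides which agent receives the initial user message.
agent_workflow = AgentWorkflow(
    agents=[research_agent, write_agent, review_agent],
    root_agent=research_agent.name,
)
```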
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai is there something I need to place in the description? Or should I be looking at some state or context status that is emitted in the tool functions?
Let me check the knowledge base for you again. One moment! :books:
----------------------
Based on the knowledge sources provided, I can offer some insights into your question about the agent descriptions and tool functions in the LlamaIndex multi-agent workflow.

Regarding the description field, it is indeed important, but it's not the only factor in ensuring proper handoffs between agents. The description is used by other agents to decide who to hand off control to next. Here's an example of how a description is used in the FunctionAgent setup:

```python
write_agent = FunctionAgent(
    name="WriteAgent",
    description="Useful for writing a report on a given topic.",
    system_prompt=(
        "You are the WriteAgent that can write a report on a given topic. "
        "Your report should be in a markdown format. The content should be grounded in the research notes. "
        "Once the report is written, you should get feedback at least once from the ReviewAgent."
    ),
    llm=llm,
    tools=[write_report],
    can_handoff_to=["ReviewAgent", "ResearchAgent"],
)
```


As for looking at state or context status emitted in the tool functions, this is indeed a crucial part of the process. The tool functions you've defined (like record_notes, write_report, and review_report) interact with the context and state of the workflow. For example:

```python
from llama_index.core.workflow import Context

async def record_notes(ctx: Context, notes: str, notes_title: str) -> str:
    """Useful for recording notes on a given topic. Your input should be notes with a title to save the notes under."""
    current_state = await ctx.get("state")
    if "research_notes" not in current_state:
        current_state["research_notes"] = {}
    current_state["research_notes"][notes_title] = notes
    await ctx.set("state", current_state)
    return "Notes recorded."
```
This function updates the state with new notes and returns a confirmation message. The state is then accessible to other agents and tools in the workflow.
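
As a rough counterpart (not the canonical example, just an illustration of reading shared state), a write_report tool could pull those notes back out of the same state dict:

```python
async def write_report(ctx: Context, report_content: str) -> str:
    """Useful for writing a report grounded in the recorded research notes."""
    current_state = await ctx.get("state")
    # The notes stored by record_notes are visible here via the shared workflow state.
    notes = current_state.get("research_notes", {})
    current_state["report_content"] = report_content
    await ctx.set("state", current_state)
    return f"Report written using {len(notes)} note section(s)."
```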

To debug and understand the flow of your multi-agent system, you can use the streaming events feature of AgentWorkflow. This allows you to see what's happening under the hood as the agents work and hand off control. Here's an example of how you can set this up:

```python
from llama_index.core.agent.workflow import AgentOutput, ToolCall, ToolCallResult

handler = agent_workflow.run(user_msg="Your task description here")

async for event in handler.stream_events():
    if isinstance(event, AgentOutput):
        if event.response.content:
            print("📤 Output:", event.response.content)
        if event.tool_calls:
            print(
                "🛠️  Planning to use tools:",
                [call.tool_name for call in event.tool_calls],
            )
    elif isinstance(event, ToolCallResult):
        print(f"🔧 Tool Result ({event.tool_name}):")
        print(f"  Arguments: {event.tool_kwargs}")
        print(f"  Output: {event.tool_output}")
    elif isinstance(event, ToolCall):
        print(f"🔨 Calling Tool: {event.tool_name}")
        print(f"  With arguments: {event.tool_kwargs}")
```

This will give you detailed information about each agent's output, which tools are being called, and what the results of those tool calls are. This can help you identify where the workflow might be breaking down or failing to hand off correctly.

If you're still experiencing issues after reviewing these aspects, you may want to double-check that your AgentWorkflow is set up correctly, especially the can_handoff_to parameters for each agent, as mentioned in the previous answer.
LlamaIndex Documentation provides more detailed information on setting up and troubleshooting multi-agent systems with AgentWorkflow.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai how can I debug the handoff in the event stream and understand why control is not being handed off to a specific agent?
Let me check the knowledge base for you again. One moment! :books:
----------------------
To debug the handoff process in the event stream and understand why it's not being handed off to a specific agent, you can leverage the observability features provided by LlamaIndex. Here's how you can approach this:

  1. Enable a global handler:
    LlamaIndex provides global handlers that can help you track and trace the inner workings of the library. You can turn on the basic one like this:
```python
import llama_index.core

llama_index.core.set_global_handler("simple")
```


This enables basic logging of prompts and LLM responses, which can help you spot where a handoff happens (or fails to happen) between agents.

  2. Implement detailed event streaming:
    You can expand on the event streaming example to include more detailed information about handoffs. Here's an enhanced version of the event streaming code:
```python
from llama_index.core.agent.workflow import (
    AgentInput, AgentOutput, ToolCall, ToolCallResult, AgentStream
)

handler = agent_workflow.run(user_msg="Your task description here")
current_agent = None

async for event in handler.stream_events():
    # Announce whenever a different agent takes over.
    if hasattr(event, "current_agent_name") and event.current_agent_name != current_agent:
        current_agent = event.current_agent_name
        print(f"\n{'=' * 50}")
        print(f"🤖 Agent: {current_agent}")
        print(f"{'=' * 50}\n")

    if isinstance(event, AgentOutput):
        if event.response.content:
            print("📤 Output:", event.response.content)
        if event.tool_calls:
            print("🛠️ Planning to use tools:", [call.tool_name for call in event.tool_calls])
            for call in event.tool_calls:
                # The handoff itself is just another tool call named "handoff".
                if call.tool_name == "handoff":
                    print(f"🔄 Attempting handoff to: {call.tool_kwargs.get('to_agent')}")
                    print(f"   Reason: {call.tool_kwargs.get('reason')}")
    elif isinstance(event, ToolCallResult):
        print(f"🔧 Tool Result ({event.tool_name}):")
        print(f"   Arguments: {event.tool_kwargs}")
        print(f"   Output: {event.tool_output}")
        if event.tool_name == "handoff":
            print(f"🔄 Handoff result: {event.tool_output}")
    elif isinstance(event, ToolCall):
        print(f"🔨 Calling Tool: {event.tool_name}")
        print(f"   With arguments: {event.tool_kwargs}")
```

This enhanced version will provide more detailed information about handoff attempts and their results.

  3. Check the state and context:
    You can inspect the state and context at various points in the workflow to understand why a handoff might not be occurring. Add this to your event stream processing:
```python
if isinstance(event, AgentOutput):
    state = await handler.ctx.get("state")
    print("Current state:", state)
    current_agent_name = await handler.ctx.get("current_agent_name")
    print("Current agent:", current_agent_name)
```


  4. Use the LlamaDebugHandler for more detailed tracing:
    You can create a custom debug handler to get more detailed information:
```python
from llama_index.core.callbacks import LlamaDebugHandler, CallbackManager

debug_handler = LlamaDebugHandler(print_trace_on_end=True)
callback_manager = CallbackManager([debug_handler])

agent_workflow = AgentWorkflow(
    agents=[research_agent, write_agent, review_agent],
    root_agent=research_agent.name,
    initial_state={
        "research_notes": {},
        "report_content": "Not written yet.",
        "review": "Review required.",
    },
    callback_manager=callback_manager,
)
```


This will provide a detailed trace of events, including handoffs, at the end of the workflow execution.

  5. Check the can_handoff_to configurations:
    Make sure that the can_handoff_to lists for each agent are correctly set up. For example:
```python
print("ResearchAgent can handoff to:", research_agent.can_handoff_to)
print("WriteAgent can handoff to:", write_agent.can_handoff_to)
print("ReviewAgent can handoff to:", review_agent.can_handoff_to)
```


Ensure that the agent you expect to receive the handoff is in the can_handoff_to list of the current agent.

By implementing these debugging techniques, you should be able to trace the handoff process, understand why a specific handoff might not be occurring, and identify any issues in your multi-agent workflow configuration.
LlamaIndex Tracing and Debugging Documentation provides more information on debugging techniques, including using third-party tools for visualization and debugging.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@Logan M I am running into a situation where I am using the multi-agent workflow. In my case I am generating some code and I want to run the code in the interpreter. I have a tool and an agent that runs the code. However, in the multi-agent workflow it does not execute the step that runs that agent. How do I force the workflow to always run a particular step or agent?
I'm not sure what you mean. You can't force. If you need more control or a more specific workflow, put it all into one tool, or write your own workflow and ditch the AgentWorkflow
yeah i guess the writing is on the wall - i have to write my own workflow
i was thinking about putting into one tool
let me think more
At the end of the day, writing your own is some work, but it will give you way more control over what happens
@Logan M do you have an example where a @step will call an agent? I have an agent that has to review the code for syntax issues so was wondering how to do that
You can do anything inside @step 👀 So it won't be any different than any other step right?
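For illustration, a minimal sketch of a step that calls an agent, assuming the llama_index.core.workflow Workflow/step API and a FunctionAgent that can be run directly via .run() (on older versions you may need to wrap it in an AgentWorkflow first); the names here are just placeholders:

```python
from llama_index.core.workflow import Workflow, step, StartEvent, StopEvent
from llama_index.core.agent.workflow import FunctionAgent

class ReviewWorkflow(Workflow):
    def __init__(self, review_agent: FunctionAgent, **kwargs):
        super().__init__(**kwargs)
        self.review_agent = review_agent

    @step
    async def review_code(self, ev: StartEvent) -> StopEvent:
        # An agent is just another awaitable inside a step.
        result = await self.review_agent.run(
            user_msg=f"Review this code for syntax issues:\n\n{ev.code}"
        )
        return StopEvent(result=str(result))
```

You would run it with something like `result = await ReviewWorkflow(review_agent=review_agent, timeout=120).run(code=generated_code)`.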
It’s just that I need to make a cycle of review and check, and if the code does not pass the check, rewrite it better. That was being done in the multi-agent approach, but it’s flaky.
That's pretty straightforward to do in a workflow, just bouncing between two steps until an LLM outputs something that signals that it's done. I usually do something roughly like "Output a review of the code. If it's good enough, write "DONE!" in your response" -- then it's easy to tell when the loop is done
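A rough sketch of that two-step loop, assuming the llama_index.core.workflow API and an llm object with an async achat method; the event names and prompts are illustrative, not a canonical implementation:

```python
from llama_index.core.llms import ChatMessage
from llama_index.core.workflow import Workflow, step, StartEvent, StopEvent, Event

class CodeWritten(Event):
    code: str

class ReviewFailed(Event):
    code: str
    review: str

class WriteReviewLoop(Workflow):
    def __init__(self, llm, **kwargs):
        super().__init__(**kwargs)
        self.llm = llm

    @step
    async def write_code(self, ev: StartEvent | ReviewFailed) -> CodeWritten:
        # First pass writes from the task; later passes rewrite using the review feedback.
        if isinstance(ev, StartEvent):
            prompt = f"Write code for this task:\n{ev.task}"
        else:
            prompt = f"Rewrite this code to address the review.\n\nCode:\n{ev.code}\n\nReview:\n{ev.review}"
        resp = await self.llm.achat([ChatMessage(role="user", content=prompt)])
        return CodeWritten(code=resp.message.content)

    @step
    async def review_code(self, ev: CodeWritten) -> ReviewFailed | StopEvent:
        prompt = (
            "Output a review of the code below. "
            f"If it is good enough, write DONE! in your response.\n\n{ev.code}"
        )
        resp = await self.llm.achat([ChatMessage(role="user", content=prompt)])
        review = resp.message.content
        # "DONE!" is the loop-exit signal; otherwise bounce back to write_code.
        if "DONE!" in review:
            return StopEvent(result=ev.code)
        return ReviewFailed(code=ev.code, review=review)
```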
ok let me figure this out... the multi-agent example works like 50% of the time, so now i am going in and trying to write these steps out in a coherent way.