Updating the handoff prompt solved the problem.
from llama_index.core.prompts import PromptTemplate

# Custom handoff prompt that instructs the LLM to output a structured function call.
# Literal braces in the JSON example are doubled ({{ }}) so the template engine
# does not mistake them for substitution variables like {agent_info}.
CUSTOM_HANDOFF_PROMPT = PromptTemplate(
    """You are operating as a multi-agent assistant.
If you determine that another agent is better suited to handle the user's request,
do not output plain text. Instead, output a JSON object exactly in the following format:
{{"tool": "handoff", "to_agent": "<target_agent>", "reason": "<explanation>"}}
Replace <target_agent> and <explanation> with the appropriate values.
If you are not handing off, simply provide your answer in normal text.
Currently available agents:
{agent_info}
"""
)
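The brace escaping is the easy thing to get wrong here: llama_index's `PromptTemplate` uses Python's `str.format`-style substitution, so a literal `{` or `}` in the prompt body must be doubled or the render step will fail or swallow your JSON example. A minimal stdlib sketch (no llama_index required) shows the behavior this relies on:

```python
# Single braces mark substitution fields; doubled braces render as literal braces.
template = 'Output exactly: {{"tool": "handoff", "to_agent": "{to_agent}"}}'

rendered = template.format(to_agent="ResearchAgent")
print(rendered)
# Output exactly: {"tool": "handoff", "to_agent": "ResearchAgent"}
```

If the braces in the JSON skeleton were left single, `format()` would raise a `KeyError` (for `"tool"`) instead of rendering the prompt.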