
Workflow

I was testing a multi-agent workflow but unfortunately I was running into an issue:
Plain Text
llama_index.core.workflow.errors.WorkflowRuntimeError: Error in step 'run_agent_step': Error code: 400 - {'error': {'message': "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_yLipsAco1KnF3jvwLCsUZQKc", 'type': 'invalid_request_error', 'param': 'messages.[4].role', 'code': None}}
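For context, this 400 is OpenAI rejecting the chat history itself: every assistant message that contains `tool_calls` must be followed by one `tool` message answering each `tool_call_id`. A minimal sketch of the check the API is effectively performing (the validator and the sample messages are mine, for illustration; OpenAI additionally requires the tool responses to come immediately after the assistant message):

```python
# Sketch of the ordering OpenAI's chat API enforces on tool calls.
# The validator and sample messages below are illustrative, not
# llama-index internals.

def unanswered_tool_calls(messages):
    """Return tool_call_ids that have no later tool message answering them."""
    pending = set()
    for msg in messages:
        if msg["role"] == "assistant":
            for call in msg.get("tool_calls", []):
                pending.add(call["id"])
        elif msg["role"] == "tool":
            pending.discard(msg["tool_call_id"])
    return pending

ok_history = [
    {"role": "user", "content": "embed the data"},
    {"role": "assistant", "tool_calls": [{"id": "call_abc"}]},
    {"role": "tool", "tool_call_id": "call_abc", "content": "embeddings created"},
]

bad_history = ok_history[:2]  # assistant tool call with no tool response

print(unanswered_tool_calls(ok_history))   # set()
print(unanswered_tool_calls(bad_history))  # {'call_abc'}
```

A history like `bad_history` is what produced the error above: the workflow sent an assistant tool call (here `call_yLipsAco1KnF3jvwLCsUZQKc`) without its matching tool response.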

The code is as below.

Plain Text
embed_agent = FunctionAgent(
    name="embeddings creator",
    description="Performs embeddings creation task",
    system_prompt="You are an assistant to create embeddings for the given data",
    tools=[
        FunctionTool.from_defaults(fn=create_embeddings),
    ],
    llm=llm,
    can_handoff_to=["data ingestor"]
)

ingest_agent = FunctionAgent(
    name="data ingestor",
    description="Performs data ingestion task",
    system_prompt="You are an assistant to ingest data / embeddings into a vector database",
    tools=[
        FunctionTool.from_defaults(fn=ingest_to_vec_db),
    ],
    llm=llm
)


trigger:
Plain Text
async def main():
    # Create and run the workflow
    workflow = AgentWorkflow(
        agents=[embed_agent, ingest_agent], root_agent="embeddings creator"
    )

    await workflow.run(user_msg="embed the data and ingest it to vector database")


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())


We have the functions defined and we're using an LLM that has function-calling capability. Can someone please help me understand this?
Going to need way more info πŸ˜…

what version of llama-index-core? I just put out a version today, 0.12.14, I would make sure you update

What llm?

It would also help if you printed the stream events, because then I'd know what the workflow is doing and where it stopped working.
llm = gpt-4o

data orchestration

llama-index==0.12.14
llama-index-llms-ollama==0.5.0
llama-index-llms-openai==0.3.14
llama-index-llms-anthropic==0.6.4
llama-index-embeddings-huggingface==0.5.1
llama-index-embeddings-ollama==0.5.0
llama-index-vector-stores-qdrant==0.4.3

environment

python-dotenv

observability

openlit
After upgrading to the latest version, I don't see any error, but my second agent in the sequence is not getting called.
Plain Text
downloading the llamaindex/vdr-2b-multi-v1 model from huggingface
You try to use a model that was created with version 3.3.0, however, your version is 3.2.1. This might cause unexpected behavior or errors. In that case, try to update to the latest version.



The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
creating document structure
Generated documents structure:
[
    {
        "architecture_description": "This image depicts a hybrid search architecture using both dense and sparse embedding models integrated with Qdrant for vector search. It includes an orchestrator and message queue (RabbitMQ) to handle user queries, retrieve context, and process responses via tool agents.",
        "architecture_image": "images/image-1.png"
    },
 .....
]

Process finished with exit code 0

According to my logic, I made a handoff call to the ingestion agent.
It's available to hand off to the ingestion agent, but it can also choose to do nothing and end the loop.
This is why you are able to have a back-and-forth chat with this system.
If you want it to be one-shot, you'll need better prompt engineering, or a smarter LLM.
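Along those lines, one option is to make the handoff an explicit instruction in the first agent's system prompt. The wording below is my own suggestion, not from the thread:

```python
# Hypothetical tightened system prompt for the first agent; the exact
# wording is an assumption — the point is to make the handoff mandatory
# rather than optional.
EMBED_AGENT_PROMPT = (
    "You are an assistant that creates embeddings for the given data. "
    "After the embeddings are created, you MUST hand off to the "
    "'data ingestor' agent to store them. Do not stop or reply to the "
    "user until the handoff has happened."
)

print("data ingestor" in EMBED_AGENT_PROMPT)  # True
```

This string would be passed as `system_prompt=EMBED_AGENT_PROMPT` when constructing `embed_agent`; note that the referenced agent name must match the `name` given to the second agent exactly.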
ummmm, thanks, let me check it out