Fixing Issues with Creating a Multi-agent Workflow Using Context

Hi guys, can anyone help me fix this issue [https://github.com/run-llama/llama_index/issues/17653]? I tried all the approaches but am still getting the same error. I am trying to create a multi-agent setup using Context, following this doc [https://docs.llamaindex.ai/en/stable/examples/agent/agent_workflow_multi/]
I left another comment on your issue. Pretty sure you shouldn't be using asyncio.run() in a server, define a normal async API instead
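For example, a minimal sketch of what a normal async API could look like, assuming Flask 2.x with the async extra installed and an agent_workflow object built elsewhere (both assumptions; the thread doesn't show the server code):

Python
from flask import Flask, request

app = Flask(__name__)

@app.post("/chat")
async def chat():
    # An async view lets the framework drive the coroutine itself,
    # so there is no asyncio.run() inside the request handler.
    message = request.json["message"]
    # agent_workflow is assumed to be constructed at startup elsewhere.
    result = await agent_workflow.run(user_msg=message)
    return {"response": str(result)}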
I didn't use it in the server code, just in the routes code to generate responses for me
Plain Text
llama-index-core>=0.12.0
llama-index>=0.12.10

I also updated the versions
Did you try the code I gave on the issue though?
If you are still having problems after trying that code, it would be great if I could reproduce it locally with an example
I assumed the routes were part of Flask
I actually did this

Plain Text
def run_in_loop(coro):
    # Reuse one long-lived event loop (via our query_service helper) instead of
    # asyncio.run(), which creates and closes a fresh loop on every call.
    loop = query_service._get_or_create_loop()
    return loop.run_until_complete(coro)

then in the response handler I'm not using asyncio.run(), as you said
I was able to resolve this
Can I ask one thing? I am using Context. Is Context the short-term memory for the agent's runtime during a chat? And if I want long-term memory, how can I store it?
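Roughly: the Context holds the short-term state for a run, and for long-term memory you can serialize it and store it yourself. A minimal sketch, assuming an AgentWorkflow named workflow (where you persist the dict is up to you, e.g. a file or database):

Python
from llama_index.core.workflow import Context, JsonSerializer

# Short-term: the Context carries state for the current conversation/run.
ctx = Context(workflow)
# response = await workflow.run(user_msg="hi", ctx=ctx)  # inside async code

# Long-term: snapshot the context to a plain dict, persist it anywhere,
# then rebuild it later to continue where the chat left off.
ctx_dict = ctx.to_dict(serializer=JsonSerializer())
restored_ctx = Context.from_dict(workflow, ctx_dict, serializer=JsonSerializer())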
I am trying to use GitHub repo tools, using FunctionTool for the agent's replies, and trying to use Context. The Context import is fine, but AgentWorkflow may not be working; it shows up white (unhighlighted) in my editor
You might need the latest version of llama-index-core here: pip install -U llama-index-core
I will look into that, thanks for sharing the link
Plain Text
llama-index-core>=0.12.0
llama-index>=0.12.10

Do I need both of them, or will the one you mentioned work best?
It's at v0.12.14, as I am using requirements.txt
So, llama-index is actually just a wrapper on a bunch of "starter packages"
https://docs.llamaindex.ai/en/stable/getting_started/installation/#quickstart-installation-from-pip

But in any case, those deps seem fine, 0.12.14 is correct πŸ‘
Maybe your IDE is just not updating with the new package installs
It happens to me sometimes too
But the imports should work
That's great, really appreciate the help. Let me upgrade it
Plain Text
llama-index-core>=0.12.14
llama-index>=0.12.14


I pip installed these, but it looks like AgentWorkflow above is still white rather than changing to that blue type color, and JsonSerializer too, as you can see
Do you suggest anything I could do so it all looks properly color-coded?
I think this is just your IDE -- if you run the code, it should work. Maybe restarting your IDE will help as well
Sure, let me try it
No, bad luck, it remains the same :)
sounds sus

Did you actually install the packages, i.e. pip install -r requirements.txt? pip show llama-index-core will show the installed version that Python would use
Yes, I installed it. Let me check pip show
Plain Text
pip show llama-index-core
Name: llama-index-core
Version: 0.12.14
Summary: Interface between LLMs and your data
Home-page: https://llamaindex.ai
Author: Jerry Liu
Author-email: jerry@llamaindex.ai
License: MIT
Location: d:\ai\venv\lib\site-packages
Requires: tiktoken, tqdm, deprecated, pillow, nest-asyncio, SQLAlchemy, dataclasses-json, tenacity, PyYAML, httpx, nltk, fsspec, numpy, networkx, filetype, wrapt, typing-extensions, dirtyjson, aiohttp, pydantic, requests, typing-inspect
Required-by: llama-parse, llama-index, llama-index-readers-llama-parse, llama-index-readers-file, llama-index-question-gen-openai, llama-index-program-openai, llama-index-multi-modal-llms-openai, llama-index-llms-openai, llama-index-indices-managed-llama-cloud, llama-index-embeddings-openai, llama-index-cli, llama-index-agent-openai
This is what I have
Ok, and what if you open a python terminal?

For example, on my machine
Plain Text
llama-index-py3.10(base) loganmarkewich@Mac llama_index % python           
Python 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:41:52) [Clang 15.0.7 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from llama_index.core.agent.workflow import AgentWorkflow
>>> 
Yeah, it looks the same as yours
Seems to be working
I have one question. I created multiple agents, but I think I am struggling to set up the root agent's can_handoff_to logic with the other agents properly. Is there any way I can use Context to parallelize agents on a task (like OpenAI's Swarm, as an example)?
I'm not sure what you mean. The way the system works, one agent is active at a given time. Tool calls run in parallel (assuming your tools are properly async). You could set up multiple agents as tools if you wanted
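A minimal sketch of the agents-as-tools idea (the sub-agent, its prompt, and the helper names are illustrative, not from the thread):

Python
from llama_index.core.agent.workflow import FunctionAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4o-mini")

def word_count(text: str) -> int:
    """Trivial placeholder tool so the sub-agent has something to call."""
    return len(text.split())

# A hypothetical sub-agent specialized on one job.
summarizer = FunctionAgent(
    name="RepoSummarizer",
    description="Summarizes repository contents.",
    system_prompt="You summarize repository files concisely.",
    tools=[word_count],
    llm=llm,
)

async def summarize_repo(query: str) -> str:
    """Run the sub-agent and return its final answer as text."""
    result = await summarizer.run(user_msg=query)
    return str(result)

# The parent agent can now call the sub-agent like any other async tool,
# which is how you'd get several "agents" doing work at the same time.
repo_tool = FunctionTool.from_defaults(async_fn=summarize_repo)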
I see. I'm actually struggling to set up tools with agents. I was trying to expose a complete GitHub repo as tools to get all the data etc., and to make agents that help me work on my GitHub code, so I made one root agent and a few sub-agents. But I'm struggling to find the best way to wire those up as FunctionTool calls for my agent, and I get some errors. I followed the doc, but the doc doesn't cover such a setup.
Is it ok if I share how I'm doing it, so I can get some help?
One more follow-up question: since tool calls are parallel, can agents run in parallel as well, rather than one being active at a given time?
what errors do you get?
No, that's not how the system is designed. There can only be one "active" agent at a time because:
  • all "agents" share the same chat history/context
  • when an agent becomes "active", all that happens is that the system prompt and available tools change; the chat history stays the same
It really only makes sense to use agents as tools if you want to run them at the same time, imo. This could perhaps be built in in the future (having a "dispatch" functionality in addition to "handoff")
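For reference, the handoff wiring under discussion looks roughly like this (agent names, prompts, and the placeholder tool are illustrative):

Python
from llama_index.core.agent.workflow import AgentWorkflow, FunctionAgent
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4o-mini")

def list_files(path: str) -> str:
    """Placeholder tool: list repo files under a path."""
    return f"files under {path}"

root_agent = FunctionAgent(
    name="RootAgent",
    description="Routes requests to the right specialist.",
    system_prompt="Decide which specialist should handle each request.",
    tools=[list_files],
    llm=llm,
    can_handoff_to=["RepoAgent"],  # names of agents this one may activate
)

repo_agent = FunctionAgent(
    name="RepoAgent",
    description="Works with the GitHub repo contents.",
    system_prompt="Answer questions about the repository.",
    tools=[list_files],
    llm=llm,
    can_handoff_to=["RootAgent"],
)

# All agents share one chat history; a handoff only swaps which agent's
# system prompt and tools are active.
workflow = AgentWorkflow(agents=[root_agent, repo_agent], root_agent="RootAgent")
# response = await workflow.run(user_msg="What does this repo do?")  # inside async code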
I used the handoff approach, following the latest multi-agent workflow doc that was posted. I think it works like this: if one agent doesn't have access to a tool, it hands off to another agent to get the details.
The errors were mostly from when I was setting up the tools; previously I used def instead of async def for my tool creation, so that's what was actually complaining.

Also, I noticed I am using the same LlamaIndex OpenAI code, so sometimes the agent complains about token usage. I modified it, but I still hit the issue sometimes, so I reverted for now while looking through more code in the LlamaIndex GitHub to find a solution :)
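For anyone hitting the same thing, a minimal sketch of the def vs. async def tool difference (function names and bodies are placeholders):

Python
from llama_index.core.tools import FunctionTool

def fetch_file(path: str) -> str:
    """Sync version: works, but blocks the event loop, so tool calls
    cannot actually run concurrently."""
    return f"contents of {path}"

async def fetch_file_async(path: str) -> str:
    """Async version: the workflow can await it, letting several
    tool calls run in parallel."""
    return f"contents of {path}"

sync_tool = FunctionTool.from_defaults(fn=fetch_file)
async_tool = FunctionTool.from_defaults(async_fn=fetch_file_async)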