Llama agents

Hey, are you guys going to change the fastapi and llama-index-core dependencies for llama-agents? πŸ˜„ Sorry, just running into a lot of dependency conflicts
Sorry, what's this conflict?

Llama agents has a huge refactor coming soon fyi πŸ˜…
I've got fastapi conflicts with chainlit, and the llama-index-core version requirement in llama-agents is behind the current llama-index-core release.
oh amazing! can't wait for the refactor! πŸ˜„ Got a timeline on when this will be coming?
Chainlit lets me build a chat copilot embedded into a website using just Python code (https://docs.chainlit.io/deploy/copilot), which is why I'm using it. The problem is the website is so vast that I've decided to deploy a multi-agent system to answer questions (separation of concerns), and I was thinking of using llama-agents to implement it. For the time being I'm attempting this with the llama-index-core Workflow abstraction, with my backup being to deploy each llama-index agent as a LlamaIndexConversableAgent with AutoGen.
Going with workflows is the way to go. The llama-agents refactor will make this easier to deploy/scale hopefully πŸ™ timeline is likely initial release this week or early next week
What dependency is causing issues though for you? Pydantic? Something else?
With chainlit, fastapi, starlette, and uvicorn are the main dependency conflicts. With llama-index-core there are no explicit dependency conflicts; sorry, I thought there would be because the installed llama-index-core version is 0.10.68 while the llama-agents pyproject.toml states ^0.10.50 (a caret constraint that still allows anything below 0.11.0).
To orchestrate multi-agent systems using Workflow, how should I go about instantiating a common memory across the entire workflow so that all the agents within the system can have access to the same memory?
I would make the memory something you pass into the workflow
Hmm I wouldn't because each agent already has memory. I just need to think of a way to synchronize every agent's memory together. But thank you so much for the suggestion!

Unlike langgraph, Workflow never breaks my heart when I use it haha
Yea but you can override the agents memory

agent.chat("Hello", chat_history=chat_history)

So you can manage the memory outside of the agent

That would be the best way to sync up the memory of many agents
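The "manage the memory outside the agent" suggestion above can be sketched in plain Python. Note that SimpleAgent and the canned reply are hypothetical stand-ins, not the llama-index API; the real call would look like the agent.chat("Hello", chat_history=chat_history) line quoted above, with an actual LLM producing the reply:

```python
# Sketch: one chat_history list owned by the caller, passed into every
# agent's chat() call, keeps all the agents' memories in sync.
# SimpleAgent is a hypothetical stand-in for a real llama-index agent.

class SimpleAgent:
    def __init__(self, name: str):
        self.name = name

    def chat(self, message: str, chat_history: list) -> str:
        # Record the user turn, then append a canned reply.
        # A real agent would call an LLM here instead.
        chat_history.append(("user", message))
        reply = f"{self.name}: ack '{message}'"
        chat_history.append(("assistant", reply))
        return reply

# A single history shared by every agent in the workflow.
shared_history = []
researcher = SimpleAgent("researcher")
writer = SimpleAgent("writer")

researcher.chat("Find docs on deployment", chat_history=shared_history)
writer.chat("Summarize what was found", chat_history=shared_history)

# Both agents' turns now live in the same list: 2 user turns + 2 replies.
print(len(shared_history))  # -> 4
```

Because the caller owns the list, the second agent sees the first agent's turns without any explicit synchronization step.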
@Logan M are you guys intending to update the FastAPI dependencies for llama-deploy to a more recent version of FastAPI? πŸ˜„
and i didn't know it was so easy to do nested workflows! amazing
Oh wow, I forget the deps were still restrictive lol
pushed llama_deploy v0.1.1 which loosens the fastapi dep
thank you so much!