Find answers from the community

titus
Hey @Logan M! Remember to upgrade the dependencies of llama-extract, llama-deploy and llama-index-multi-modal-llms-ollama! They're still pointing to llama-index==0.11
2 comments
Hey! The draw_all_possible_flows utility fails when there's HITL (basically whenever a step returns an InputRequiredEvent)

```
File /opt/anaconda3/envs/llamaindex/lib/python3.12/site-packages/llama_index/utils/workflow/draw.py:62, in draw_all_possible_flows(workflow, filename, notebook)
     60 for return_type in step_config.return_types:
     61     if return_type != type(None):
---> 62         net.add_edge(step_name, return_type.__name__)
     64 for event_type in step_config.accepted_events:
     65     net.add_edge(event_type.__name__, step_name)

File /opt/anaconda3/envs/llamaindex/lib/python3.12/site-packages/pyvis/network.py:372, in Network.add_edge(self, source, to, **options)
    368 # verify nodes exists
    369 assert source in self.get_nodes(), \
    370     "non existent node '" + str(source) + "'"
--> 372 assert to in self.get_nodes(), \
    373     "non existent node '" + str(to) + "'"
    375 # we only check existing edge for undirected graphs
    376 if not self.directed:

AssertionError: non existent node 'InputRequiredEvent'
```
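For reference, a minimal human-in-the-loop workflow along these lines should reproduce it (the class and step names are just illustrative; InputRequiredEvent and HumanResponseEvent are the built-in HITL events):

```
from llama_index.core.workflow import (
    HumanResponseEvent,
    InputRequiredEvent,
    StartEvent,
    StopEvent,
    Workflow,
    step,
)
from llama_index.utils.workflow import draw_all_possible_flows

class HITLFlow(Workflow):
    # Pause for human input by emitting an InputRequiredEvent
    @step
    async def ask(self, ev: StartEvent) -> InputRequiredEvent:
        return InputRequiredEvent(prefix="Proceed? ")

    # Resume once a HumanResponseEvent is sent back into the workflow
    @step
    async def finish(self, ev: HumanResponseEvent) -> StopEvent:
        return StopEvent(result=ev.response)

# Raises AssertionError: non existent node 'InputRequiredEvent',
# since the utility draws an edge to a node it never added
draw_all_possible_flows(HITLFlow, filename="hitl_flow.html")
```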
1 comment
titus · Flows

Has anyone tried the new LlamaIndex Workflow abstraction?

I find it quite interesting because the Workflow abstraction requires developers to be quite familiar with LlamaIndex's lower-level abstractions (e.g. llm.get_tool_calls_from_response(), manually loading chat conversations into memory, etc.). Most of us are probably aware of the RAG abstractions but not the agent ones, because it's always just agent.chat(query). The ReAct agent example underscores the biggest difference: ReActAgent.from_tools() is one line, versus writing an entire ReAct agent workflow. If I were to write an agent using multiple RAG tools, would I have to write nested workflows?

I've not had the need to build agents from low-level abstractions yet. Did the LlamaIndex team have a particular use case in mind when designing the "Workflow" abstraction?
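For context, the minimal shape of a Workflow looks something like this (a rough sketch, names illustrative): steps are async methods that consume one event type and emit another, with StartEvent and StopEvent bounding the run.

```
from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step

class EchoFlow(Workflow):
    # Each @step consumes one event type and emits another;
    # a StartEvent kicks off the run and a StopEvent ends it
    @step
    async def echo(self, ev: StartEvent) -> StopEvent:
        # kwargs passed to .run() are readable off the StartEvent
        return StopEvent(result=f"echo: {ev.query}")

# result = await EchoFlow(timeout=10).run(query="hello")
```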
5 comments
For now I think only LLMs can return structured outputs. If I wanted to get an agent to return a structured output, my only way of doing so would be to use tool = FunctionTool.from_defaults(fn={fn}, return_direct=True), right?

Except that I need to be mindful that my tool output is a string so I don't break the AgentChatResponse output class. So if I want a pydantic base model as my return type, I'd have to store the response model's key-value pairs as a dictionary, json.dumps that dictionary as the return value, and then add post-processing code to handle it.
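Something like this rough sketch is what I mean (the Answer model and lookup function are placeholders):

```
import json

from pydantic import BaseModel

from llama_index.core.tools import FunctionTool

class Answer(BaseModel):
    title: str
    score: float

def lookup(query: str) -> str:
    # The tool output has to be a string, so dump the pydantic
    # model to JSON and recover it with post-processing later
    answer = Answer(title=query, score=0.9)
    return json.dumps(answer.model_dump())

tool = FunctionTool.from_defaults(fn=lookup, return_direct=True)

# Post-processing: Answer.model_validate_json(agent_response.response)
```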

Of course the other way would be to write a custom agent using Workflow and handle the ToolOutput there.
1 comment
Is anyone facing this: https://github.com/run-llama/llama_deploy/issues/250

I'm not sure why, but the moment I deployed my workflow I started having issues with my RAG query engine tool
10 comments
I'm also having some difficulty calling my LLM from Bedrock using the same code as before: 7 pydantic validation errors 😦
10 comments
titus · Llama agents

Hey, are you guys going to change the fastapi and llama-index-core dependencies for llama-agents? 😄 Sorry, just running into a lot of dependency conflicts
19 comments