Agent

Hey all, does anyone have an example of building an agent with a function-calling LLM where the final output is streamed? There is the option of passing the full final message to a final step and streaming it from there, but that takes a latency hit. I haven't found a nice solution yet, since the full message is required to determine whether a function call is needed.
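For reference, a minimal sketch of the baseline described above, with illustrative names (`llm.chat`, `tool_calls`, `acall`) rather than any specific LlamaIndex API:

```python
# Hypothetical baseline: run the tool loop on complete (non-streamed)
# responses, then emit the final message from the buffer in a final step.
# The latency hit: nothing reaches the user until the whole final
# message has finished generating.
async def agent_step(llm, tools, messages):
    while True:
        response = await llm.chat(messages, tools=tools)  # full message
        if not response.tool_calls:
            break  # no function call requested: this is the final answer
        messages.append(response.message)
        for call in response.tool_calls:
            messages.append(await tools[call.name].acall(**call.arguments))
    # "Stream" the already-complete message token by token.
    for token in response.text.split():
        yield token
```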
Are you building the agent from mostly scratch?

Detecting the tool call is very sneaky. I did it here in a workflow, using an async generator that first yields a boolean to signal whether it's a tool call, and then yields the stream if not:
https://colab.research.google.com/drive/1UjDJMyXR11HKIki3tuMew6EEzq91ewYw?usp=sharing#scrollTo=1XoDZK0YvQQe
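A minimal sketch of that trick: peek at the first streamed chunk, since function-calling models report tool calls in a dedicated field there, and only commit to streaming once you know it's a plain answer. `astream_chat` follows LlamaIndex's LLM interface, but the `tools=` parameter and the `tool_calls`/`delta` fields on the chunk are assumptions about a typical streaming response object:

```python
# Async generator whose FIRST yield is a boolean ("is this a tool
# call?") and which only then yields the token stream.
async def detect_then_stream(llm, messages, tools):
    stream = await llm.astream_chat(messages, tools=tools)
    first = await stream.__anext__()          # peek at the first chunk
    if getattr(first, "tool_calls", None):
        yield True                            # tool call: don't stream
        return
    yield False                               # plain answer: relay tokens
    yield first.delta
    async for chunk in stream:
        yield chunk.delta

# Caller reads the boolean first, then either runs tools or relays tokens:
# gen = detect_then_stream(llm, messages, tools)
# if not await gen.__anext__():
#     async for token in gen:
#         print(token, end="", flush=True)
```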
Yeah, mostly from scratch using Workflows. I had a similar idea to what you did, so I'll take a look at the notebook. Was it relatively successful?
Seemed to work pretty well!
I was also thinking of creating a "Final Answer" tool that takes only a boolean, to limit output tokens, and then passing the final message on to a final step if that tool gets called.
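A sketch of that idea: deciding "I'm done" then costs only a few output tokens, and the real answer is generated and streamed by a dedicated final step with no tools attached. `FunctionTool.from_defaults` is LlamaIndex's actual wrapper; the tool itself and the surrounding flow are illustrative:

```python
from llama_index.core.tools import FunctionTool

def final_answer(ready: bool) -> bool:
    """Call with ready=True once you have everything needed to answer."""
    return ready

final_answer_tool = FunctionTool.from_defaults(fn=final_answer)

# In the agent loop (illustrative): if the LLM calls `final_answer`,
# break out and run a final streaming step -- only the cheap boolean
# tool call was spent on the decision.
```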
Another option that could also work is using the new event streaming API.

(It wasn't available yet when I wrote that notebook)
https://docs.llamaindex.ai/en/stable/understanding/workflows/stream/
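A minimal sketch of that API, following the linked docs: a step pushes token events onto the workflow's stream with `ctx.write_event_to_stream`, and the caller consumes them via `handler.stream_events()` while the run is still in flight. The hardcoded token list stands in for a real LLM token stream:

```python
import asyncio

from llama_index.core.workflow import (
    Context, Event, StartEvent, StopEvent, Workflow, step,
)

class TokenEvent(Event):
    token: str

class StreamingAgent(Workflow):
    @step
    async def answer(self, ctx: Context, ev: StartEvent) -> StopEvent:
        # Emit each token as it arrives, instead of waiting for the step
        # (or the whole workflow) to finish.
        for token in ["Streamed", " ", "answer"]:  # stand-in for LLM tokens
            ctx.write_event_to_stream(TokenEvent(token=token))
        return StopEvent(result="Streamed answer")

async def main():
    handler = StreamingAgent(timeout=60).run()
    async for ev in handler.stream_events():
        if isinstance(ev, TokenEvent):
            print(ev.token, end="", flush=True)
    result = await handler  # resolves to the StopEvent result

if __name__ == "__main__":
    asyncio.run(main())
```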
This could work too! But it depends on the LLM reliably calling that tool.
Just curious, but are you building this as part of where you work? Workflows are new, so I'm always curious about the use cases and business cases people are working on with them πŸ”₯
Yeah, I actually had a discussion with Biswaroop recently about the use cases and was going to ping him again for a follow-up call. I'll mention that you should be included as well, if you're interested.
Oh sweet! Bis already chatted πŸ”₯