
Hi, I'm trying to use `as_structured_llm` with `llama_index.llms.openai.OpenAI` and pass that as the `llm` to `llama_index.agent.openai.OpenAIAgent.from_tools()`, but I'm getting `ValueError: llm must be a OpenAI instance`. It seems `as_structured_llm` changes the LLM enough that it's no longer considered an `OpenAI` instance.

Is there a better way to produce structured output from an OpenAIAgent with tools?
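For context, a minimal sketch that reproduces the error (the model name, schema, and tool are illustrative, not from the thread):

```python
from pydantic import BaseModel
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI
from llama_index.agent.openai import OpenAIAgent

# Illustrative output schema and tool.
class Answer(BaseModel):
    title: str
    summary: str

def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

tools = [FunctionTool.from_defaults(fn=add)]

llm = OpenAI(model="gpt-4o")
structured_llm = llm.as_structured_llm(Answer)

# Raises: ValueError: llm must be a OpenAI instance
agent = OpenAIAgent.from_tools(tools, llm=structured_llm)
```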
hmm, seems like a bug. A workaround would be to just use a function calling agent:

```python
from llama_index.core.agent import FunctionCallingAgentWorker

agent = FunctionCallingAgentWorker.from_tools(...).as_agent()
```
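Filled in (reusing the illustrative `tools` and `llm` from the sketch above), that would look something like:

```python
# Note: pass the plain (non-structured) LLM here.
worker = FunctionCallingAgentWorker.from_tools(tools, llm=llm)
agent = worker.as_agent()
response = await agent.achat("What is 2 + 3?")
```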
Thinking about this some more, maybe `as_structured_llm` will prevent the LLM from invoking tools?
I'll try this. Thanks
oh hmm, good point actually. It might actually be breaking the agent's tool calling too 😅
Logically, I'm not sure how to accomplish both. πŸ€”
Me neither actually. Seems like there's a general incompatibility between how the structured LLM works and how agents work
The API rejects this as well: `AttributeError: 'StructuredLLM' object has no attribute 'achat_with_tools'`
Maybe I can ask the non-structured LLM to answer the prompt with sufficient context and then run a follow-up with the structured LLM to format it into the needed structure.
For future reference, this kinda works:
```python
from llama_index.core.llms import ChatMessage

# First pass: the tool-equipped agent answers the question.
await agent.achat(question.prompt)

# Second pass: take the agent's last message and ask the structured
# LLM to reformat it.
messages = [
    agent.memory.get_all()[-1],
    ChatMessage.from_str("Please convert your previous message into the required format."),
]

response = await structured_llm.achat(messages=messages)
```

Here, `agent` has the tool(s) and `structured_llm` has the structure. The parsed object is in `response.raw`.
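To pull the parsed object out (using the illustrative `Answer` schema from the first sketch):

```python
# response.raw holds the parsed Pydantic object.
answer = response.raw
assert isinstance(answer, Answer)
print(answer.title, answer.summary)
```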
I thought I was clever and tried passing a tool for the output. This is similar to how `StructuredLLM` does it, but I'm including my other tools as well. Unfortunately, the `AgentRunner` forces `tool_choice` to `"auto"` after the first step, and the LLM doesn't call my output function. 😦 🤔
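A hypothetical sketch of that "output tool" idea, for illustration only (`Answer` and `record_answer` are made-up names, and the forced `tool_choice` is shown commented out):

```python
from pydantic import BaseModel
from llama_index.core.tools import FunctionTool

class Answer(BaseModel):  # illustrative schema, as in the first sketch
    title: str
    summary: str

def record_answer(title: str, summary: str) -> Answer:
    """Record the final structured answer."""
    return Answer(title=title, summary=summary)

output_tool = FunctionTool.from_defaults(fn=record_answer)

# tool_choice can be forced on the first call, but AgentRunner resets it
# to "auto" on later steps, so the model may never call record_answer:
# response = await agent.achat("...", tool_choice="record_answer")
```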
Maybe I can make a custom agent πŸ€”
We do have examples of custom agents with workflows πŸ‘€
Ended up doing this for now:
[Attachment: image.png]
It's a hack, but it avoids the extra LLM call without re-implementing a bunch of other step-related stuff.
That ended up being too unstable, so I implemented a custom workflow based on this example:
https://docs.llamaindex.ai/en/stable/examples/workflow/function_calling_agent/
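For reference, a condensed sketch of that pattern (assuming recent llama-index workflow APIs; event names follow the docs example, and error handling is omitted):

```python
from llama_index.core.llms import ChatMessage
from llama_index.core.memory import ChatMemoryBuffer
from llama_index.core.tools import ToolSelection
from llama_index.core.workflow import (
    Event, StartEvent, StopEvent, Workflow, step,
)

class InputEvent(Event):
    input: list[ChatMessage]

class ToolCallEvent(Event):
    tool_calls: list[ToolSelection]

class FunctionCallingAgent(Workflow):
    def __init__(self, *, llm, tools, **kwargs):
        super().__init__(**kwargs)
        self.llm = llm
        self.tools = tools
        self.memory = ChatMemoryBuffer.from_defaults(llm=llm)

    @step
    async def prepare_chat_history(self, ev: StartEvent) -> InputEvent:
        # Record the user's message and kick off the loop.
        self.memory.put(ChatMessage(role="user", content=ev.input))
        return InputEvent(input=self.memory.get())

    @step
    async def handle_llm_input(self, ev: InputEvent) -> ToolCallEvent | StopEvent:
        # Let the LLM respond; stop if it made no tool calls.
        response = await self.llm.achat_with_tools(
            self.tools, chat_history=ev.input
        )
        self.memory.put(response.message)
        tool_calls = self.llm.get_tool_calls_from_response(
            response, error_on_no_tool_call=False
        )
        if not tool_calls:
            return StopEvent(result=response)
        return ToolCallEvent(tool_calls=tool_calls)

    @step
    async def handle_tool_calls(self, ev: ToolCallEvent) -> InputEvent:
        # Run each requested tool and feed the outputs back to the LLM.
        tools_by_name = {t.metadata.get_name(): t for t in self.tools}
        for tool_call in ev.tool_calls:
            tool = tools_by_name[tool_call.tool_name]
            output = await tool.acall(**tool_call.tool_kwargs)
            self.memory.put(
                ChatMessage(
                    role="tool",
                    content=str(output),
                    additional_kwargs={"tool_call_id": tool_call.tool_id},
                )
            )
        return InputEvent(input=self.memory.get())

# Usage: agent = FunctionCallingAgent(llm=llm, tools=tools, timeout=120)
#        result = await agent.run(input="What is 2 + 3?")
```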
custom workflow is the way to go imo: more flexible, easier to add features, at the cost of slightly more code
But now you have full control
Expect our docs to slowly push these approaches over time πŸ™‚