
Why do Tool Agents randomly abort and return 'None' with no error message? Debugging shows a 200 response from OAI. I tried setting max_iterations=100 but got the same result, and it aborts after just a few steps.
19 comments
Can you share some sample code? What LLM are you using?
import asyncio
import json

from llama_index.agent import OpenAIAgent
from llama_index.llms import OpenAI
from llama_index.tools import FunctionTool

# update_product, search, product, and product_schema are defined elsewhere (omitted here)
update_product_tool = FunctionTool.from_defaults(fn=update_product)
search_tool = FunctionTool.from_defaults(fn=search)

llm = OpenAI(temperature=0.1, model="gpt-3.5-turbo-1106")
agent = OpenAIAgent.from_tools(
    [update_product_tool, search_tool], llm=llm, verbose=True, max_iterations=100
)

async def main():
    try:
        response = agent.chat(
            f"Complete the following JSON object for the SaaS product. Return COMPLETED when you have received SUCCESS as a result. Use search to find missing information.\n\n{product}\n\n{product_schema}"
        )
        print(response)
    except Exception as e:
        print(e)

    print(json.dumps(product, indent=2))

if __name__ == "__main__":
    asyncio.run(main())


This will run several times, calling those functions back and forth, but after maybe 5-8 times it will return a response of 'None' and exit
hmm, I think maybe it is hitting the max iteration count? It's probably struggling to write the schema for whatever reason.

Maybe a less agentic approach is needed here? Or using gpt-4?
Yeah, I thought maybe it was hitting max_iterations, which defaults to 10, so I set it to 100. It made no difference, so something else is stopping it
I don't think it's even getting to 10 honestly
Is there a way to debug how many iterations it hit?
Since you have verbose=True on the agent, it should be logging each step right?
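You could also turn up plain Python logging to get more detail than the verbose output (rough sketch, not specific to the agent -- it just surfaces llama-index's and the openai client's internal debug messages so you can count the steps):

import logging
import sys

# Rough sketch: standard Python debug logging will show the library's internal
# messages and HTTP requests, which makes it easier to count how many
# tool-calling steps actually ran before the agent stopped.
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
logging.getLogger("openai").setLevel(logging.DEBUG)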
Yes, I see it logging "Calling Function" several times, but it's unclear whether that's multiple function calls or a single one. On my most recent run I see 6 function calls before it exits
Yea it should be a log per single function call

Seems like it maybe called 6 times, and then decided it didn't need to call anymore (for whatever reason)

Maybe we can take a step back -- what are you trying to accomplish with this? There might be a better way to approach the problem
Sure, the goal here is to output a JSON object that matches (and validates) against a JSON schema

This is trivial to do in most use cases, but in this particular case the Schema and resulting Object are rather large, likely too large to fit in most context windows. Also, completing the Object may require several tool calls to research different pieces of content (i.e. generally a web search).

I thought an iterative approach using an agentic design might solve this. The agent could 'update' the object with any sub-properties of the Schema it found, and that call would return JSON Schema validation errors for any invalid or missing fields until the object is 100% valid (matches the schema, including "required" fields, etc.)
The updates get appended to the globally stored object in memory, so the LLM/agents don't need to keep a complete record of it in context; they only need to worry about the missing or invalid properties. Eventually I would likely need some lightweight or temporary RAG to store the research history.
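Roughly sketched, the update tool looks something like this (not the exact code -- illustrative only, assuming the jsonschema package and placeholder product/product_schema objects):

from jsonschema import Draft7Validator

product: dict = {}          # globally stored object being built up (loaded elsewhere)
product_schema: dict = {}   # the large JSON Schema (loaded elsewhere)

def update_product(updates: dict) -> str:
    """Merge partial updates into the global product and report what is still missing or invalid."""
    product.update(updates)
    errors = list(Draft7Validator(product_schema).iter_errors(product))
    if not errors:
        return "SUCCESS"
    # Return only the outstanding problems, so the agent never needs the whole object in context
    return "\n".join(e.message for e in errors[:20])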
Thanks for the help btw Logan, appreciate it πŸ™
hmmm. Is there a way to split the schema into smaller schemas that you merge together later? Just to make it a little more manageable for the LLM?
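Something like this maybe (rough, untested sketch -- just splitting a flat object schema by its top-level properties and merging the partial objects afterwards):

def split_schema(schema: dict, chunk_size: int = 5) -> list[dict]:
    """Split a large object schema into smaller schemas over subsets of its properties."""
    props = list(schema.get("properties", {}).items())
    required = set(schema.get("required", []))
    chunks = []
    for i in range(0, len(props), chunk_size):
        subset = dict(props[i:i + chunk_size])
        chunks.append({
            "type": "object",
            "properties": subset,
            "required": [k for k in subset if k in required],
        })
    return chunks

def merge_objects(parts: list[dict]) -> dict:
    """Merge the partial objects produced for each sub-schema back together."""
    merged: dict = {}
    for part in parts:
        merged.update(part)
    return merged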
I agree it seems like using an agent for this is still the right-ish thing to do (still brainstorming a bit on how to make a more structured program)
The little bit I can get it to run seems promising, like it should work, if only I could get it to keep iterating.

I tried brute-forcing it to re-run, and that immediately results in an error from OAI complaining that a tool call was not responded to
Are the two tools query engines? You can actually configure query engines to output a schema as well (it would take a tiny bit of tweaking to make that work in an agent setting)
They are not, but that's really interesting, would love to explore that more if you have references?

I'm thinking maybe you were right earlier, maybe the agent thinks the task is done or has exhausted its task plan? Is there a way to control how/when it knows it's done?
Here's an example with query engine + pydantic inputs
https://docs.llamaindex.ai/en/stable/module_guides/querying/structured_outputs/query_engine.html
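Roughly, based on that page (sketch only -- ProductInfo, the data directory, and the query are placeholders, assuming the llama_index API of that docs version):

from pydantic import BaseModel
from llama_index import SimpleDirectoryReader, VectorStoreIndex

class ProductInfo(BaseModel):
    """Structured fields you want the query engine to fill in."""
    name: str
    pricing_model: str
    description: str

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# output_cls tells the query engine to return a pydantic object instead of free text
query_engine = index.as_query_engine(output_cls=ProductInfo, response_mode="tree_summarize")
response = query_engine.query("What are the product's name, pricing model, and description?")
print(response.name, response.pricing_model)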

"Is there a way to control how/when it knows it's done?" -- maybe you can play around with a system prompt? OpenAIAgent.from_tools(...., system_prompt="...")
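e.g. something like this (just a sketch -- the prompt wording is made up):

agent = OpenAIAgent.from_tools(
    [update_product_tool, search_tool],
    llm=llm,
    verbose=True,
    system_prompt=(
        "You are filling in a JSON object against a schema. "
        "Do NOT stop or declare the task complete until update_product returns SUCCESS. "
        "If validation errors are returned, keep searching and updating."
    ),
)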
Thanks! Will give these a try