Is there a way to have OpenAIAgent behave like the OpenAI Assistant, i.e. with its response consisting of a tool call, then a message, then another tool call, and so on until it considers the answer good enough? As opposed to the default in the LlamaIndex OpenAIAgent, where the agent calls tool after tool and only sends a message once it thinks the answer is complete.
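For context, here's roughly the default setup I mean (a minimal sketch; the import paths, the model name, and the multiply tool are just placeholders and may differ by LlamaIndex version):

```python
from llama_index.core.tools import FunctionTool
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# Default behavior: the agent keeps calling tools internally and only
# returns a single final assistant message once it is satisfied.
agent = OpenAIAgent.from_tools(
    [FunctionTool.from_defaults(fn=multiply)],
    llm=OpenAI(model="gpt-4o-mini"),  # placeholder model
    verbose=True,
)

response = agent.chat("What is 12 * 34, and then that result times 2?")
print(response)  # one final message, no intermediate assistant messages in between
```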
I'm using OpenAIAgent in stepwise mode, and I'm wondering how I can stream the original OpenAI output that gets saved in the chat history. Right now I can only manage to stream the ToolOutput and ChatMessage classes, and ToolOutput is pretty much unusable for feeding back into the agent.
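To show where I'm stuck, here's roughly how I'm running it stepwise right now (a minimal sketch, assuming the same agent as above; between steps the only things I can get at are the ToolOutput objects on step_output.output.sources and the ChatMessage objects in agent.chat_history, whose additional_kwargs carry the raw tool_calls from OpenAI):

```python
# Stepwise execution: I can see ToolOutput and ChatMessage between steps,
# but not a clean stream of the raw OpenAI assistant output.
task = agent.create_task("What is 12 * 34, and then that result times 2?")

while True:
    step_output = agent.run_step(task.task_id)

    # Tool results produced in this step (list of ToolOutput)
    for tool_output in step_output.output.sources:
        print("tool:", tool_output.tool_name, "->", tool_output.content)

    if step_output.is_last:
        break

# The raw OpenAI messages (including tool_calls) end up in the chat history
for msg in agent.chat_history:
    print(msg.role, msg.content, msg.additional_kwargs.get("tool_calls"))

final = agent.finalize_response(task.task_id)
print(final)
```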
Also, is there a way to have some kind of middleware before and after each tool call the agent makes? I want to publish what the bot is doing and which tool it chose.
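The only workaround I can think of is wrapping each tool function myself before building the FunctionTool, something like this (a rough sketch; publish_event is a hypothetical hook for whatever pub/sub channel I end up using):

```python
import functools
from llama_index.core.tools import FunctionTool

def publish_event(event: str, **data):
    """Hypothetical hook: push an event to whatever channel the bot publishes to."""
    print(event, data)

def with_middleware(fn):
    """Wrap a tool function so an event is published before and after each call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        publish_event("tool_started", tool=fn.__name__, args=args, kwargs=kwargs)
        result = fn(*args, **kwargs)
        publish_event("tool_finished", tool=fn.__name__, result=result)
        return result
    return wrapper

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# Build the FunctionTool from the wrapped function so the agent
# triggers the "middleware" on every call it makes.
multiply_tool = FunctionTool.from_defaults(fn=with_middleware(multiply))
```

But that feels hacky, so if there's a built-in hook for this I'd rather use that.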