I’m using LlamaIndex at the moment, and when chatting with the API through a ReAct agent I sometimes hit a max iterations error. Is there an easy way to raise the iteration limit without forking the library? Or is there any other way around this?
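If I remember right, recent versions of llama_index let you set `max_iterations` when you construct the agent, so no fork needed. A rough sketch (the import path and the default of 10 are from memory, so double-check against your installed version):

```python
# Sketch assuming the llama_index.core.agent API; `my_tools` and `my_llm`
# are placeholders for your own tool list and LLM instance.
from llama_index.core.agent import ReActAgent

agent = ReActAgent.from_tools(
    my_tools,
    llm=my_llm,
    verbose=True,
    max_iterations=25,  # bump from the default (10, I believe)
)
response = agent.chat("your question here")
```

If you're on an older release where the constructor doesn't take that argument, upgrading is probably easier than patching.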
Hey guys, does anybody know how to capture the full output of a ReAct chat’s thought process, observations, and so on? My current workaround is wrapping the whole program in another Python script that captures the terminal output, but nothing I do inside my Python application actually captures the verbose logs.
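If the verbose Thought/Observation lines are going to stdout via `print` (which I believe is how the ReAct verbose mode emits them, though I haven't checked the source recently), you can capture them in-process with `contextlib.redirect_stdout` instead of wrapping the whole script. A minimal sketch, with `fake_chat` standing in for your `agent.chat(...)` call:

```python
import contextlib
import io

def capture_verbose_output(fn, *args, **kwargs):
    """Run fn while capturing anything it prints to stdout.

    Returns a (result, captured_text) tuple so you keep both the
    chat response and the verbose trace.
    """
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        result = fn(*args, **kwargs)
    return result, buf.getvalue()

# Stand-in for agent.chat(...); replace with your real agent call.
def fake_chat(message):
    print("Thought: I need to use a tool.")
    print("Observation: tool returned 42")
    return "42"

answer, trace = capture_verbose_output(fake_chat, "what is 6 * 7?")
```

One caveat: this only catches `print`-based output. If the library routes anything through the `logging` module instead, attach a handler to the relevant logger (e.g. `logging.getLogger("llama_index")`) rather than redirecting stdout.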