Hello everyone! I am using the stream_chat method from OpenAIAgent with Azure OpenAI models. Azure applies content filters to its deployments, and I would like to intercept the resulting openai.BadRequestError before the message is returned to the user.
However, I noticed that in the write_response_to_history function (https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/chat_engine/types.py#L120), the exception is captured and only re-raised if the raise_error argument is set to True. How can I set this flag when calling agent.stream_chat(user_prompt)?
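
For context, here is roughly what I am doing. Deployment and endpoint values are placeholders for my actual Azure config, and the except branch is where I would like the error to surface:

```python
import openai
from llama_index.llms.azure_openai import AzureOpenAI
from llama_index.agent.openai import OpenAIAgent

# Placeholder Azure configuration (substitute your own deployment details).
llm = AzureOpenAI(
    engine="my-gpt4-deployment",
    model="gpt-4",
    api_key="...",
    azure_endpoint="https://my-resource.openai.azure.com/",
    api_version="2024-02-01",
)

agent = OpenAIAgent.from_tools([], llm=llm)
user_prompt = "..."  # a prompt that trips Azure's content filter

try:
    response = agent.stream_chat(user_prompt)
    for token in response.response_gen:
        print(token, end="", flush=True)
except openai.BadRequestError as err:
    # This is where I would like to handle the content-filter error,
    # but with the default raise_error=False the exception seems to be
    # captured inside write_response_to_history and never reaches here.
    print(f"Blocked by content filter: {err}")
```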