Hello everyone! I am using the `stream_chat` method of `OpenAIAgent` with Azure OpenAI models. I noticed that Azure's models apply content filters, and I would like to intercept the resulting `openai.BadRequestError` before the message is returned to the user.
However, I noticed that in the `write_response_to_history` function (https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/chat_engine/types.py#L120) the exception is captured and only re-raised if the `raise_error` argument is set to `True`. How can I set this flag when calling `agent.stream_chat(user_prompt)`? πŸ€”
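For reference, here is a minimal sketch of what I am trying to do (`llm`, `tools`, `user_prompt`, and the `handle_filtered_request` handler are placeholders, not real names from my code). As things stand, the `except` branch never fires, because `write_response_to_history` swallows the exception when `raise_error` is `False`:

```python
import openai
from llama_index.agent.openai import OpenAIAgent

# Placeholders: llm would be an AzureOpenAI LLM, tools my tool list
agent = OpenAIAgent.from_tools(tools, llm=llm)

try:
    response = agent.stream_chat(user_prompt)
    # Consume the streamed tokens as they arrive
    for token in response.response_gen:
        print(token, end="", flush=True)
except openai.BadRequestError as err:
    # Intercept Azure's content-filter rejection here instead of
    # streaming the raw error text back to the user
    handle_filtered_request(err)  # hypothetical handler
```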
2 comments
Hey @Logan M, I am considering opening a PR to make this flag, `raise_error`, settable, although it's not clear to me whether it should be an additional argument in the function signature or an entry in the `extra_kwargs` of the task definition. Do you have a suggestion on this?
I mean, my instinct would be to have it as a setting at the agent level
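e.g. something along these lines, purely a sketch of the proposed API, since no `raise_error` argument exists on the agent constructor today:

```python
# Hypothetical API for the proposed PR: the flag would be stored on the
# agent and forwarded to write_response_to_history on each stream_chat call.
agent = OpenAIAgent.from_tools(
    tools,
    llm=llm,
    raise_error=True,  # NOT an existing argument; sketch only
)
```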