Hi, I'm building a RAG pipeline with various LLMs, some OpenAI ones and some via llama_index.llms.azure_inference, and I just realized that the default LlamaIndex LLM class doesn't expose a timeout parameter the way the OpenAI integration does. Is there an easy way to add a timeout, or did I miss something?
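The only workaround I've come up with so far is wrapping the async call myself with asyncio.wait_for, roughly like the sketch below (the endpoint, credential, and model name are just placeholders, and I'm relying on the acomplete method from the base LLM interface). It works, but it feels clunky compared to a built-in timeout option:

```python
import asyncio

from llama_index.llms.azure_inference import AzureAICompletionsModel

# Placeholder endpoint/credential/model -- fill in your own values.
llm = AzureAICompletionsModel(
    endpoint="https://<your-endpoint>.inference.ai.azure.com",
    credential="<your-key>",
    model_name="<your-model>",
)


async def complete_with_timeout(llm, prompt: str, timeout: float = 30.0):
    """Run the async completion and raise asyncio.TimeoutError if it takes too long."""
    return await asyncio.wait_for(llm.acomplete(prompt), timeout=timeout)


async def main():
    try:
        response = await complete_with_timeout(llm, "Hello, world", timeout=10.0)
        print(response.text)
    except asyncio.TimeoutError:
        print("LLM call timed out")


asyncio.run(main())
```

Is there a cleaner, built-in way to do this that works across the different LLM integrations?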