
@kapa.ai I keep seeing httpx in the warning output of the POST requests going to Azure OpenAI. Is there something in LlamaIndex that keeps showing this only in my Jupyter notebooks?
Please stand by @cmosguy as I search the knowledge base. I normally take about 15 seconds or so. :wave:
----------------------
Based on the knowledge sources provided, I can offer some insight into your question about seeing httpx in the warning output related to Azure OpenAI in your Jupyter notebooks when using LlamaIndex.

The httpx messages you're seeing are most likely INFO-level logs from httpx, the HTTP client library that LlamaIndex uses to make API calls. Several of the provided documents show examples of these logs, which appear to be common when using LlamaIndex with various API services, including Azure OpenAI and Mistral AI.

For example, in the LlamaIndex documentation on fine-tuning, we can see similar log outputs:

Plain Text
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"


These logs are not errors, but rather informational messages about the HTTP requests being made. They tend to be more visible in Jupyter notebooks because notebooks often enable INFO-level logging and display that output inline with each cell.

If you find these messages distracting, you might be able to adjust the logging level in your LlamaIndex configuration or in your Jupyter environment to suppress these INFO-level logs. However, the knowledge sources don't provide specific instructions on how to do this within LlamaIndex.

It's worth noting that these logs can be useful for debugging purposes, as they provide information about the API calls being made by LlamaIndex.
If you're concerned about these logs or if they're causing any issues with your use of LlamaIndex, you may want to consult the official LlamaIndex documentation or reach out to their support channels for more specific guidance on managing log output in your particular setup.
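
As an aside (this is a common pattern rather than something from the knowledge sources): many example notebooks enable INFO-level logging globally at the top of the notebook, which is exactly what makes httpx's request lines visible. A minimal sketch of that boilerplate:

Plain Text
import logging
import sys

# Typical notebook setup: send log records to stdout at INFO level.
# Once this runs, httpx's INFO-level "HTTP Request: ..." lines appear
# beneath each cell that makes an API call.
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
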
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai is there some logging configuration option to hide these in llama index?
Let me check the knowledge base for you again. One moment! :books:
----------------------
Based on the knowledge sources provided, I don't have specific information about logging configuration options in LlamaIndex to hide the httpx messages you're seeing. The documents mainly show examples of these logs appearing, but don't discuss how to configure or suppress them within LlamaIndex itself.

However, since these are logging messages, it's likely that you can control them using Python's standard logging configuration. LlamaIndex probably uses Python's logging module, which allows for adjusting log levels.

Here's a general approach you could try, although I can't guarantee it will work without specific LlamaIndex documentation:

  1. Import the logging module at the beginning of your script or notebook:
Plain Text
import logging


  2. Set the logging level to a higher level, such as WARNING, which should suppress INFO-level messages:
Plain Text
logging.getLogger("httpx").setLevel(logging.WARNING)


You might also need to set the level for the LlamaIndex logger, if it exists:

Plain Text
logging.getLogger("llama_index").setLevel(logging.WARNING)


These steps should theoretically reduce the amount of log output you're seeing, including the httpx messages.
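
For convenience, here's a sketch of a single notebook cell combining the steps above. Note that the "llama_index" logger name is assumed from the package name and may differ between versions; the "openai" logger is also included since the openai client library makes the underlying HTTP calls:

Plain Text
import logging

# Quiet the HTTP client's INFO-level request logs
# (the "HTTP Request: POST ..." lines).
logging.getLogger("httpx").setLevel(logging.WARNING)

# Optionally quiet LlamaIndex's own logger as well; the logger name
# here is assumed from the package name and may vary by version.
logging.getLogger("llama_index").setLevel(logging.WARNING)

# If request lines still appear, the openai client library (which uses
# httpx under the hood) has its own logger you can raise too.
logging.getLogger("openai").setLevel(logging.WARNING)
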

However, I must emphasize that this is a general logging approach and not specifically documented for LlamaIndex based on the knowledge sources provided. If this doesn't work, or if you need more specific guidance, I recommend checking the official LlamaIndex documentation for any logging configuration options, or reaching out to their support channels or community forums for more tailored advice.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------