The community members discuss how to check which large language model (LLM) is active for a given query. One member asks for the simplest way to verify which LLM is generating the response, noting that the default is GPT-3.5 but that they sometimes change the model and would like to double-check which one is actually in use. They also discuss setting the LLM globally with Settings.llm = OpenAI(temperature=0.1, model=config.get("llm_model")), and confirm that this should set the LLM for everything. However, there is no explicitly marked answer in the comments.
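A minimal sketch of how the active model could be inspected, assuming LlamaIndex's global Settings API and a recent version where the OpenAI LLM exposes a model field and LLM metadata; the config dict below is a hypothetical stand-in for the config.get("llm_model") call quoted in the discussion:

```python
# Minimal sketch, assuming LlamaIndex's global Settings API.
# `config` is a hypothetical stand-in for the user's config.get("llm_model") setup.
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI

config = {"llm_model": "gpt-4o-mini"}  # hypothetical config value

# Set the LLM globally, as discussed in the thread.
Settings.llm = OpenAI(temperature=0.1, model=config.get("llm_model"))

# Double-check which LLM is active by inspecting the object on Settings.
print(type(Settings.llm).__name__)       # LLM class in use, e.g. "OpenAI"
print(Settings.llm.model)                # model string that was passed in
print(Settings.llm.metadata.model_name)  # model name reported via LLM metadata
```

Printing Settings.llm (or its metadata) right before running the query is a quick way to confirm which model will generate the response, since query engines pick up the globally configured LLM unless one is passed explicitly.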