If you haven't customized the model, LlamaIndex will default to text-davinci-003.
You can set any model by creating a service context and registering it as the global service context:
from llama_index import ServiceContext, set_global_service_context
from llama_index.llms import OpenAI
service_context = ServiceContext.from_defaults(llm=OpenAI(model="gpt-3.5-turbo", temperature=0))
set_global_service_context(service_context)
You can also enable debug logging to see the extra log output from the openai client:
import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
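To see why this works, here is a minimal stdlib-only sketch of the same mechanism, writing to an in-memory buffer instead of stdout so the effect is easy to inspect (the logger name and message below are hypothetical, not actual openai output):

```python
import logging
import io

# Attach a handler to a demo logger, mirroring the setup above but
# capturing output in a StringIO buffer rather than sys.stdout.
buf = io.StringIO()
logger = logging.getLogger("openai_demo")  # hypothetical logger name
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler(stream=buf))

# With the level set to DEBUG, low-level messages (such as the HTTP
# request details a client library emits) are no longer filtered out.
logger.debug("request: POST /v1/chat/completions")  # hypothetical message

print(buf.getvalue().strip())
```

The same principle applies to the snippet above: `basicConfig(level=logging.DEBUG)` lowers the root logger's threshold, and the extra `StreamHandler` ensures those records reach stdout.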