Hi guys, I have a question: I'm using llama_index

Hi guys, I have a question: I'm using llama_index in a Python script for a GPT bot with custom data, but I can't really see where it accesses the GPT API or which GPT model is used.
If you haven't customized the model, then it will be using text-davinci-003.
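
For context, here is a minimal sketch of how that default gets used (the "data" folder and the query string are just placeholders). If you never pass a service context, the OpenAI API is hit with that default model at query time:

Plain Text
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# No ServiceContext is passed anywhere, so the default LLM is used
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)  # embeddings are also fetched from the OpenAI API here

# The completion model (text-davinci-003 by default) is called here
response = index.as_query_engine().query("What is this document about?")
print(response)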

You can easily set any model by creating a service context and setting it as the global service context:

Plain Text
from llama_index import ServiceContext, set_global_service_context
from llama_index.llms import OpenAI

# Use gpt-3.5-turbo instead of the default LLM
service_context = ServiceContext.from_defaults(llm=OpenAI(model="gpt-3.5-turbo", temperature=0))

set_global_service_context(service_context)
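
If you'd rather not set it globally, you can also pass the service context directly when building a specific index. A rough sketch, again with a placeholder "data" folder:

Plain Text
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Only this index will use gpt-3.5-turbo; others keep their own settings
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)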


You can also enable debug logs to see the extra logging from openai:

Plain Text
import logging
import sys

# Route all DEBUG-level logging to stdout so the underlying API calls are visible
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))

So it is using GPT-3 already.