
You can create a ChatOpenAI object and pass it to the service_context like this:
Plain Text
from langchain.chat_models import ChatOpenAI
from llama_index import LLMPredictor

# Wrap the LangChain chat model in LlamaIndex's LLMPredictor
llm_predictor = LLMPredictor(llm=ChatOpenAI(openai_api_key=OPENAI_API_KEY, temperature=0, max_tokens=1024, model_name="gpt-3.5-turbo"))


Then pass this into the service context:
Plain Text
from llama_index import ServiceContext

service_context = ServiceContext.from_defaults(chunk_size_limit=512, llm_predictor=llm_predictor)
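
For completeness, here is a minimal sketch of using that service_context to build and query an index. It assumes the pre-0.10 LlamaIndex API, and "./data" is a placeholder for your own documents folder:
Plain Text
from llama_index import VectorStoreIndex, SimpleDirectoryReader

# "./data" is a hypothetical path -- point it at your own documents
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

# The query engine uses the LLM configured in the service context
query_engine = index.as_query_engine()
print(query_engine.query("What does this document say?"))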
I think this is slightly incorrect πŸ‘€
Plain Text
from llama_index import ServiceContext
from llama_index.llms import OpenAI

llm = OpenAI(model="gpt-3.5-turbo", temperature=0)
service_context = ServiceContext.from_defaults(llm=llm)
@keychron you may need to share your code again, not sure why it's using davinci based on the last time I saw it πŸ‘€
So Langchain's ChatOpenAI is not being used anymore πŸ˜…?
We have our own LLM abstractions now πŸ’ͺ They work a little better across llama-index features (query engines, chat engines, agents) https://gpt-index.readthedocs.io/en/latest/core_modules/model_modules/llms/modules.html
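As an illustration, here is a minimal sketch of the native LLM abstraction driving a chat engine end to end. Same pre-0.10 API assumptions as above, and "./data" is again a hypothetical documents folder:
Plain Text
from llama_index import VectorStoreIndex, ServiceContext, SimpleDirectoryReader
from llama_index.llms import OpenAI

# The native OpenAI LLM plugs into query engines, chat engines, and agents alike
llm = OpenAI(model="gpt-3.5-turbo", temperature=0)
service_context = ServiceContext.from_defaults(llm=llm)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

# A chat engine keeps conversation state across turns
chat_engine = index.as_chat_engine()
print(chat_engine.chat("Summarize the documents"))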
@Logan M full code here