You can create a ChatOpenAI object and pass it to the service_context

At a glance
The post provides an example of how to create a ChatOpenAI object and pass it to the service_context. The comments suggest that this approach may not be correct, and community members provide alternative ways to set up the language model, such as using OpenAI from the llama_index.llms module. Some community members also mention that the project now has its own LLM abstractions that work better across various features.
Useful resources
You can create a ChatOpenAI object and pass it to the service_context like this:
Plain Text
from langchain.chat_models import ChatOpenAI
from llama_index import LLMPredictor

# Wrap the LangChain chat model in llama_index's LLMPredictor
llm = LLMPredictor(llm=ChatOpenAI(openai_api_key=OPENAI_API_KEY, temperature=0, max_tokens=1024, model_name="gpt-3.5-turbo"))


Then pass this into the service context:
Plain Text
service_context = ServiceContext.from_defaults(chunk_size_limit=512, llm=llm)
6 comments
I think this is slightly incorrect πŸ‘€
Plain Text
from llama_index import ServiceContext
from llama_index.llms import OpenAI

# Use llama-index's own LLM class instead of LangChain's ChatOpenAI
llm = OpenAI(model="gpt-3.5-turbo", temperature=0)
service_context = ServiceContext.from_defaults(llm=llm)
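For context, here is a minimal sketch of how such a service_context is typically used when building and querying an index; the ./data directory is a hypothetical example path, and this assumes the same ServiceContext-era llama_index API as the snippet above.
Plain Text
from llama_index import VectorStoreIndex, SimpleDirectoryReader

# Build an index whose queries run through the configured gpt-3.5-turbo LLM
documents = SimpleDirectoryReader("./data").load_data()  # hypothetical example path
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

response = index.as_query_engine().query("What is this document about?")
print(response)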
@keychron you may need to share your code again, not sure why it's using davinci based on the last time I saw it πŸ‘€
LangChain's ChatOpenAI is not being used anymore πŸ˜…?
We have our own LLM abstractions now πŸ’ͺ They work a little better across llama-index features (query engines, chat engines, agents) https://gpt-index.readthedocs.io/en/latest/core_modules/model_modules/llms/modules.html
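To illustrate, here is a minimal sketch of calling the native LLM abstraction directly, outside of any index or engine; this assumes the same legacy llama_index API shown above.
Plain Text
from llama_index.llms import OpenAI

llm = OpenAI(model="gpt-3.5-turbo", temperature=0)

# The native LLM can be called on its own; complete() returns a CompletionResponse
completion = llm.complete("Say hello in one short sentence.")
print(completion.text)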
@Logan M full code here