
Does anyone know if gpt-4 is now supported?
Yeah, it's always been supported, actually:

Python
from langchain.chat_models import ChatOpenAI
from llama_index import LLMPredictor, ServiceContext

# Wrap a GPT-4 chat model in an LLMPredictor and hand it to the ServiceContext
llm_predictor = LLMPredictor(llm=ChatOpenAI(model_name="gpt-4", temperature=0))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)
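
(For context: the service_context built above is then passed in when constructing an index. A minimal sketch, assuming a hypothetical ./data directory and a llama_index release from the same era as the snippet above:)

Python
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Load documents from a hypothetical ./data directory
documents = SimpleDirectoryReader("./data").load_data()

# Build the index; queries are then answered by gpt-4 via the service_context above
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

print(index.as_query_engine().query("What does this document cover?"))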
@Logan M Thank you for this! I am running this on an AWS EC2 instance and often run into out-of-memory errors. What would be the optimal amount of memory to use?
Oh really? That's kind of weird. I would expect memory to not really be an issue unless you are creating huuuuge indexes (and in that case, you should be using a hosted vector store)

How much memory do you have currently? I would think 4-8GB would be enough for most use cases
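
(For context: moving the embeddings into a hosted vector store keeps them out of local RAM. A minimal sketch assuming Pinecone and the same legacy llama_index API as above; the API key, environment, and index name are placeholders:)

Python
import pinecone
from llama_index import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores import PineconeVectorStore

# Connect to an existing Pinecone index (placeholder credentials)
pinecone.init(api_key="YOUR_API_KEY", environment="us-west1-gcp")
pinecone_index = pinecone.Index("my-index")

# Store vectors in Pinecone instead of the default in-memory store
vector_store = PineconeVectorStore(pinecone_index=pinecone_index)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
    service_context=service_context,
)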
I think it's the number of indexes. Currently I am trying it on a 2GB instance, but yeah, I will probably upgrade to a higher-memory instance.
Ah, that makes sense! 2GB is definitely a little small