Migrate

Hi, I updated to the latest llamaindex and I am having trouble migrating my code. This was my code before migration:
Plain Text
llm_predictor = ChatGPTLLMPredictor(llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-0613", streaming=False, max_tokens=1000))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper, callback_manager=callback_manager)

and now this is example code that is giving me an error after migrating to the latest llamaindex version:
Plain Text
llm = OpenAI(temperature=0, model="gpt-3.5-turbo")
service_context = ServiceContext.from_defaults(llm=llm)
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)
query_engine = index.as_query_engine()
response = query_engine.query("hi")
print(response)

The error I get is AttributeError: 'ServiceContext' object has no attribute 'llm'. What am I doing wrong here, please? πŸ˜… I tried following this guide, but without success: https://gpt-index.readthedocs.io/en/latest/how_to/customization/llms_migration_guide.html
9 comments
I will try running this locally in a bit! Just letting you know I've seen this lol
Hmm, so it's the second line that is causing problems? It seems to work for me

(also heads up, when you load an index from storage, you'll want to pass the service context back in as a kwarg)
index = load_index_from_storage(storage_context, service_context=service_context)
Plain Text
from llama_index import LLMPredictor

llm = OpenAI(temperature=0, model_name="text-davinci-003")
llm_predictor = LLMPredictor(llm=llm)
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

Not sure if that will get you where you need to go, but it's what I've done to get around that issue in the last hour
where is OpenAI coming from?

This was my code

Plain Text
from llama_index.llms import OpenAI
from llama_index import ServiceContext
llm = OpenAI(temperature=0, model="gpt-3.5-turbo")
service_context = ServiceContext.from_defaults(llm=llm)
Didn't paste all my code, just grabbed a snippet
I was pulling it from LangChain, unfortunately
Ah! That's the culprit then I think
I was actually using the from llama_index.llms import OpenAI and it still wasn't working, but I did something and it works now. Either upgrading from 0.7.0 to 0.7.1 or creating a new virtualenv fixed it. So thanks for the help!
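For anyone hitting the same AttributeError, a quick sanity check before debugging further is to confirm which llama_index version is actually installed in the active environment (a stale virtualenv was part of the problem here). This is a generic sketch using only the standard library; the version-comparison helpers are ad hoc, not part of llama_index:

```python
import importlib.metadata


def version_tuple(version: str) -> tuple:
    """Parse a dotted version string like '0.7.1' into a comparable tuple.

    Note: this simple sketch assumes purely numeric components and will
    not handle suffixes like 'rc1'.
    """
    return tuple(int(part) for part in version.split(".")[:3])


def llama_index_is_at_least(min_version: str = "0.7.1") -> bool:
    """Return True if the installed llama_index meets the minimum version."""
    try:
        installed = importlib.metadata.version("llama_index")
    except importlib.metadata.PackageNotFoundError:
        # Package not installed in this environment at all.
        return False
    return version_tuple(installed) >= version_tuple(min_version)


# Tuples compare element-wise, so 0.7.0 correctly sorts before 0.7.1.
assert version_tuple("0.7.0") < version_tuple("0.7.1")
```

If this returns False in the environment your script runs in (even though pip show says otherwise in your shell), you are likely importing from a different virtualenv, which matches the fix described above.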
Haha nice! Glad it works now! πŸ’ͺ