Updated 2 years ago

Loading index

At a glance

The community member created an index with GPTSimpleVectorIndex, using the ChatOpenAI "gpt-3.5-turbo" model via an LLMPredictor. They saved the index to disk, but after loading it back the LLM had reverted to the default "text-davinci-003", which is more expensive. A comment suggests passing the llm_predictor again when loading the index from disk so the correct model is used.

Hi, I'm having an issue with loading a saved index. I create my index using:
llm_predictor = LLMPredictor(llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.2))
index = GPTSimpleVectorIndex(docs, llm_predictor=llm_predictor)
which uses gpt-3.5-turbo. Then I save it:
index.save_to_disk('./saved_index.json')
Then I reload with:
loaded_index = GPTSimpleVectorIndex.load_from_disk('./saved_index.json')
Now when I check the LLM model, it's using the default 'text-davinci-003', which is more expensive:
loaded_index.llm_predictor._llm.model_name
1 comment
Make sure you pass in the llm_predictor when loading from disk too πŸ‘Œ
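A minimal sketch of that fix, assuming the legacy gpt_index/llama_index API shown in the question (GPTSimpleVectorIndex, LLMPredictor, and a load_from_disk that accepts an llm_predictor keyword; the exact import paths and keyword may vary by library version):

```python
# Assumes the legacy llama_index (gpt_index) and langchain APIs from the
# question; newer versions of both libraries have replaced these classes.
from langchain.chat_models import ChatOpenAI
from llama_index import LLMPredictor, GPTSimpleVectorIndex

# Rebuild the same predictor used at index-construction time.
llm_predictor = LLMPredictor(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.2)
)

# Pass it again when loading, so the index does not fall back to the
# default text-davinci-003 model.
loaded_index = GPTSimpleVectorIndex.load_from_disk(
    './saved_index.json',
    llm_predictor=llm_predictor,
)

# This should now report gpt-3.5-turbo rather than text-davinci-003.
print(loaded_index.llm_predictor._llm.model_name)
```

The key point is that the predictor is runtime configuration, not part of the serialized index, so it has to be supplied on every load.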