for some reason whenever I load an index

for some reason, whenever I load an index.json from disk into a GPTSimpleVectorIndex, I cannot pass the service_context:

from langchain.agents import Tool, initialize_agent
from langchain.chains.conversation.memory import ConversationBufferMemory
from langchain import OpenAI
from langchain.chat_models import ChatOpenAI
from gpt_index import (
    GPTSimpleVectorIndex,
    LLMPredictor,
    PromptHelper,
    ServiceContext,
)

# define LLM
llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0))

# define prompt helper
max_input_size = 4096    # maximum input size
num_output = 256         # number of output tokens
max_chunk_overlap = 20   # maximum chunk overlap
prompt_helper = PromptHelper(max_input_size, num_output, max_chunk_overlap)

service_context = ServiceContext.from_defaults(
    llm_predictor=llm_predictor,
    prompt_helper=prompt_helper,
    chunk_size_limit=750,
)

index = GPTSimpleVectorIndex.load_from_disk(
    save_path='minimalist_entrepreneur_2.json',
    service_context=service_context,
)

and get this error:

    331 """Run query.
    332
    333 NOTE: Relies on mutual recursion between
    (...)
    344     composable graph.
    ...
     45 )
     46 llm_predictor = service_context.llm_predictor
     47 embed_model = service_context.embed_model

ValueError: Cannot use llm_token_counter on an instance without a service context.
11 comments
this is pretty weird, never had an issue with loading the index 🤔 Was the index created a long time ago?

You could try this instead
index = GPTSimpleVectorIndex.load_from_disk('minimalist_entrepreneur_2.json', service_context=service_context)

I'm assuming it works without the service context?
I just made the index and saved it to disk on the same day, with the same llama_index version as well. And yes, it does work without the service_context parameter.
@Logan M Are you able to pass service_context when you load from disk? If so, what version of llama-index are you using? I'm using 0.5.13.post1 on my computer
I sure am, I've never had an issue doing it 😅 I'm still on 0.5.7 it looks like, I'll try upgrading
Upgraded to 0.5.15, works fine for me 🤔
Maybe you need to specify the model name in ChatOpenAI? All my code is on github here https://github.com/logan-markewich/llama_index_starter_pack/blob/main/streamlit_vector/streamlit_demo.py
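Since the resolution hinged on which llama-index version each side had installed, here is a quick, generic way to check without importing the package itself (a standard-library sketch; the package name comes from the thread):

```python
# Look up an installed package's version via its distribution metadata.
# Standard library only (Python 3.8+); no import of the package required.
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg: str):
    """Return the installed version string, or None if the package is absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

print(installed_version("llama-index"))  # e.g. "0.5.15", or None if not installed
```

Note that distribution names (`llama-index`) and import names (`llama_index` / legacy `gpt_index`) can differ, which is exactly the kind of mismatch this thread ran into.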
@Logan M it worked when I updated to 0.5.15. Also, not sure if it made a difference, but I imported GPTSimpleVectorIndex from gpt_index the first time and from llama_index currently. Same versions
Oh! I must have missed that earlier. Definitely use the llama_index imports 🙏🙏
Ah I see. I assumed that they were identical but I guess not.
Thanks for your help btw! @Logan M
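For reference, the working setup the thread converges on, importing from llama_index rather than the legacy gpt_index alias, can be sketched roughly as follows. This is a hedged sketch, not a definitive fix: it assumes the 0.5.x-era API (`load_from_disk`, `ServiceContext`) and reuses the file name from earlier in the thread, and the import is guarded so the snippet degrades gracefully when the package or index file is absent.

```python
import os

# Prefer the llama_index namespace over the legacy gpt_index alias;
# mixing the two when saving vs. loading was associated with the
# service_context error discussed above.
try:
    from llama_index import GPTSimpleVectorIndex, ServiceContext
    HAVE_LLAMA_INDEX = True
except ImportError:
    HAVE_LLAMA_INDEX = False

if HAVE_LLAMA_INDEX and os.path.exists('minimalist_entrepreneur_2.json'):
    # Defaults are fine for everything except the chunk size used earlier.
    service_context = ServiceContext.from_defaults(chunk_size_limit=750)
    index = GPTSimpleVectorIndex.load_from_disk(
        'minimalist_entrepreneur_2.json',
        service_context=service_context,
    )
```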