For some reason, whenever I load an index.json from disk into a GPTSimpleVectorIndex, I cannot pass the service_context:
from langchain.agents import Tool, initialize_agent
from langchain.chains.conversation.memory import ConversationBufferMemory
from langchain.chat_models import ChatOpenAI
from langchain import OpenAI
from gpt_index import GPTSimpleVectorIndex, LLMPredictor, PromptHelper, ServiceContext
# define LLM
llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0))
# define prompt helper
# set maximum input size
max_input_size = 4096
# set number of output tokens
num_output = 256
# set maximum chunk overlap
max_chunk_overlap = 20
prompt_helper = PromptHelper(max_input_size, num_output, max_chunk_overlap)
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper, chunk_size_limit=750)
index = GPTSimpleVectorIndex.load_from_disk(save_path='minimalist_entrepreneur_2.json', service_context=service_context)
and get this error:
331 """Run query.
332
333 NOTE: Relies on mutual recursion between
(...)
344 composable graph.
...
45 )
46 llm_predictor = service_context.llm_predictor
47 embed_model = service_context.embed_model
ValueError: Cannot use llm_token_counter on an instance without a service context.
I created the index and saved it to disk earlier the same day, with the same llama_index version as well. And yes, loading does work without the service_context parameter.
@Logan M Are you able to pass service_context when you load from disk? If so, what version of llama-index are you using? I'm using 0.5.13.post1 on my machine.
@Logan M It worked when I updated to 0.5.15. Also, not sure if it made a difference, but I imported GPTSimpleVectorIndex from gpt-index the first time and from llama-index currently. Same versions.
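Since the thread came down to which distribution and version were actually installed (the package shipped under both the "gpt-index" and "llama-index" names during the 0.5.x series), a quick sketch for checking this with the standard library, before debugging anything else:

```python
# Minimal sketch: report which distribution/version is actually installed.
# Uses only the standard library; the two package names below come from the
# thread, not from any authoritative list.
import importlib.metadata


def installed_version(dist_name):
    """Return the installed version string for dist_name, or None if absent."""
    try:
        return importlib.metadata.version(dist_name)
    except importlib.metadata.PackageNotFoundError:
        return None


for name in ("llama-index", "gpt-index"):
    print(name, "->", installed_version(name))
```

If both names report a version, the import line (`from gpt_index import ...` vs `from llama_index import ...`) decides which installation you are actually running, which is one way the 0.5.13/0.5.15 confusion above can arise.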