Chancellor Hands LLC
Joined September 25, 2024
Is there a way to do that on version 0.5.15?
For some reason, whenever I load an index JSON from disk into a GPTSimpleVectorIndex, I cannot pass the service_context:

from langchain.agents import Tool, initialize_agent
from langchain.chains.conversation.memory import ConversationBufferMemory
from langchain import OpenAI
from langchain.chat_models import ChatOpenAI
from gpt_index import GPTSimpleVectorIndex, LLMPredictor, PromptHelper, ServiceContext

# define LLM
llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0))

# define prompt helper
max_input_size = 4096   # maximum input size
num_output = 256        # number of output tokens
max_chunk_overlap = 20  # maximum chunk overlap
prompt_helper = PromptHelper(max_input_size, num_output, max_chunk_overlap)

service_context = ServiceContext.from_defaults(
    llm_predictor=llm_predictor,
    prompt_helper=prompt_helper,
    chunk_size_limit=750,
)

index = GPTSimpleVectorIndex.load_from_disk(
    save_path='minimalist_entrepreneur_2.json',
    service_context=service_context,
)

and get this error:

    331 """Run query.
    332
    333 NOTE: Relies on mutual recursion between
(...)
    344     composable graph.
...
     45 )
     46 llm_predictor = service_context.llm_predictor
     47 embed_model = service_context.embed_model

ValueError: Cannot use llm_token_counter on an instance without a service context.
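For what it's worth, the traceback suggests the service_context kwarg passed to load_from_disk never gets attached to the loaded instance, so a later call that reads it finds nothing. This is not gpt_index's actual source, just a minimal pure-Python sketch (with hypothetical class names) of that failure mode:

```python
# Hypothetical sketch of the failure mode -- not gpt_index's real internals.
class ServiceContext:
    def __init__(self, chunk_size_limit=None):
        self.chunk_size_limit = chunk_size_limit


class Index:
    def __init__(self, service_context=None):
        self.service_context = service_context

    @classmethod
    def load_from_disk(cls, save_path, **kwargs):
        # Bug: kwargs (including service_context) are dropped here,
        # so the loaded instance ends up with service_context=None.
        return cls()

    def query(self, text):
        # Mirrors the guard that raises in the traceback above.
        if self.service_context is None:
            raise ValueError(
                "Cannot use llm_token_counter on an instance "
                "without a service context."
            )
        return "ok"


ctx = ServiceContext(chunk_size_limit=750)
index = Index.load_from_disk("index.json", service_context=ctx)
try:
    index.query("hello")
except ValueError as e:
    print(e)
```

If that is what is happening in 0.5.15, attaching the context after loading (or at query time, where supported) would be the workaround to try.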