sajjadazami
Joined September 25, 2024
Hi, a few questions:
  1. I'm indexing a document using the standard usage pattern (https://gpt-index.readthedocs.io/en/latest/guides/primer/usage_pattern.html#customizing-llm-s), but the resulting index contains only one node, so each query uses about 6K tokens. How do I go about debugging this?
  2. Is there a good resource (example notebook, article, etc.) on the effect of the parameters used with GPTSimpleVectorIndex (chunk size, max_input_size, etc.)?
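A self-contained sketch (plain Python, no llama_index dependency) of why a single node means every query pays for the whole document: the index splits text into chunks of at most `chunk_size` tokens, and if the document fits in one chunk, the full ~6K tokens land in every prompt. The token counting below is a rough stand-in, not the BPE tokenizer the library actually uses, and `chunk_tokens` is a hypothetical helper for illustration only.

```python
def chunk_tokens(tokens, chunk_size):
    """Split a token list into consecutive chunks of at most chunk_size tokens."""
    return [tokens[i:i + chunk_size] for i in range(0, len(tokens), chunk_size)]

# Rough stand-in for a ~6K-token document.
doc_tokens = ["tok"] * 6000

# Chunk size larger than the document -> one chunk -> one node,
# so every query sends the whole ~6K tokens as context.
print(len(chunk_tokens(doc_tokens, 8000)))  # 1

# Smaller chunk size -> more nodes; a vector query retrieves only the
# top-k most similar nodes, so per-query context cost drops to ~k * chunk_size.
nodes = chunk_tokens(doc_tokens, 512)
print(len(nodes))  # 12 nodes of <= 512 tokens each
```

So a quick first debugging step is to check how many nodes the index actually holds and what chunk size was in effect when it was built.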
9 comments
Hi, I'm having an issue with loading a saved index. I create my index using:
llm_predictor = LLMPredictor(llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.2))
index = GPTSimpleVectorIndex(docs, llm_predictor=llm_predictor)
which uses gpt-3.5-turbo. Then I save it:
index.save_to_disk('./saved_index.json')
Then I reload with:
loaded_index = GPTSimpleVectorIndex.load_from_disk('./saved_index.json')
Now when I check the LLM model, it's using the default 'text-davinci-003', which is more expensive:
loaded_index.llm_predictor._llm.model_name
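If I understand the old GPTSimpleVectorIndex API correctly, the LLM choice is not serialized with the index, and load_from_disk forwards extra keyword arguments to the index constructor. Re-passing the predictor at load time should therefore restore gpt-3.5-turbo; this is a hedged sketch against that old API, not verified against your installed version.

```python
from langchain.chat_models import ChatOpenAI
from llama_index import GPTSimpleVectorIndex, LLMPredictor

# Rebuild the same predictor used at build time and hand it back in when
# loading (assumes load_from_disk forwards kwargs to the index constructor).
llm_predictor = LLMPredictor(llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.2))
loaded_index = GPTSimpleVectorIndex.load_from_disk(
    './saved_index.json',
    llm_predictor=llm_predictor,
)

# If the kwarg was picked up, this should report gpt-3.5-turbo again.
print(loaded_index.llm_predictor._llm.model_name)
```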
1 comment