I have this one question. When we parse large PDF files, the text gets structured into chunks, and only the relevant chunks are fed to GPT. So GPT doesn't have an idea of the whole PDF and is actually working only on a chunk. But what if one chunk has some data that requires understanding of another chunk (i.e. it is related to some other chunk)?
E.g. chunk256: "Alex loves Bob." chunk290: "Bob loves only Costa."
Now if I ask "is Alex's love two-sided?", should I expect a correct answer? And if yes, how is that happening?
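For context, here is a minimal sketch (not LlamaIndex's actual internals; chunk texts and embedding values are made up) of why this matters: top-k vector retrieval only hands the k most similar chunks to the LLM, so a fact split across two chunks is only answerable if both rank inside the top k.

```python
# Toy top-k retrieval over hypothetical 2-D "embeddings".
# Illustrative only -- real systems use high-dimensional model embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# hypothetical chunk store: text -> embedding
chunks = {
    "chunk256: alex loves bob": [1.0, 0.2],
    "chunk290: bob loves only costa": [0.6, 0.8],
    "chunk301: unrelated weather report": [-0.9, 0.1],
}

def retrieve(query_vec, k):
    # rank all chunks by similarity to the query, keep the top k
    ranked = sorted(chunks.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

query = [0.9, 0.4]  # pretend embedding of "is alex's love two-sided?"
print(retrieve(query, k=1))  # only chunk256 -> the LLM misses half the story
print(retrieve(query, k=2))  # both relevant chunks land in the prompt
```

With k=1 the model only sees "alex loves bob" and can't answer correctly; with k=2 both chunks reach the prompt, which is why retrieval settings like the number of chunks fetched per query matter for cross-chunk questions.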
I have made a SimpleVectorIndex from 3 docs adding up to 1000 pages. Querying it sometimes takes 1 minute. What is the best way to reduce this time?
@jerryjliu0 @Logan M @hwchase17 what is the difference between declaring llm_predictor while defining the index, like `VectorIndex = GPTSimpleVectorIndex(documents, llm_predictor=llm_predictor)`, versus while querying the index, `response = index.query(question, llm_predictor=llm_predictor)`?