Will the choice of chunk_size_limit also affect the query performance for GPTListIndex?
Yes. The response synthesizer uses the create-and-refine method, making one call per node in the index, with each call getting context from that node.
How your data is chunked can have a complex relationship with this method and with what you are trying to discover: each call gives GPT only one chunk, then another call follows with the next chunk, and so on, attempting to improve the response each time.
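For concreteness, here is a minimal sketch of how chunk_size_limit enters the picture. This assumes the early-2023 llama_index API (roughly 0.5.x), where ServiceContext.from_defaults accepted chunk_size_limit; later releases renamed and restructured these pieces:

```python
# Minimal sketch -- assumes the early-2023 llama_index API (~0.5.x),
# where ServiceContext.from_defaults accepted chunk_size_limit.
# Later releases renamed/restructured these classes.
from llama_index import GPTListIndex, SimpleDirectoryReader, ServiceContext

documents = SimpleDirectoryReader("./data").load_data()

# Smaller chunks mean more nodes, and therefore more create/refine calls
# at query time; larger chunks mean fewer calls with more context each.
service_context = ServiceContext.from_defaults(chunk_size_limit=512)

index = GPTListIndex.from_documents(documents, service_context=service_context)

# A list index query visits every node, refining the answer as it goes.
response = index.query("What does the document say about X?")
print(response)
```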
Think of a complex question you could only answer from information spread across a 3-page document, such that you would need context from all 3 pages to answer it fully. As a human, you can consider the content of all 3 pages at once and reason about how the combined information answers the question, all in a single motion. This index works differently: the question is submitted with just page 1 as context, and GPT creates the best answer it can from page 1; then another call is made with the page-1 answer, the data from page 2, and the question of whether the information from page 2 can be used to improve the existing answer; and so on.
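Stripped of the library, the mechanism looks roughly like this; llm_call is a hypothetical stand-in for one completion request:

```python
# Plain-Python sketch of the create-and-refine loop described above.
def llm_call(prompt: str) -> str:
    """Hypothetical stand-in for one LLM completion request."""
    raise NotImplementedError("plug in your model client here")

def create_and_refine(question: str, chunks: list[str]) -> str:
    # First call: answer from chunk 1 only.
    answer = llm_call(
        f"Context: {chunks[0]}\n"
        f"Answer the question using only this context: {question}"
    )
    # Each later call sees exactly one new chunk plus the previous answer;
    # no call ever sees the whole document at once.
    for chunk in chunks[1:]:
        answer = llm_call(
            f"The original question is: {question}\n"
            f"The existing answer is: {answer}\n"
            f"New context: {chunk}\n"
            "Refine the existing answer if this new context helps; "
            "otherwise return the existing answer unchanged."
        )
    return answer
```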
This is a much weaker method than what a human could do, or than what would happen if GPT could look at the entire context at once. The results depend entirely on the question: some questions can be answered entirely from one prompt, some are answered well by this kind of sequential refinement, and some do not work well with this approach at all. Best case, this approach might approximate what you could get from a single large contextual prompt, and in many cases it will perform worse.
Given the example above, where the question can only be answered by someone who knows the information on all 3 pages: with this query method, the response synthesizer could ask the question with page 1 as context and get back "there was not enough information in the context to provide a response." It would then pass in the query with page 2 as context, and GPT could again say there is not enough information to answer. So even though there is enough information across the 3 pages to answer the question, the index could end up unable to provide any response.
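You can see why from the shape of the refine prompt itself. Roughly paraphrasing llama_index's default refine template (hedged wording; the exact template varies by version):

```python
# Rough paraphrase of llama_index's default refine prompt
# (hypothetical wording; the exact template varies by version).
REFINE_PROMPT_TMPL = (
    "The original question is as follows: {query_str}\n"
    "We have provided an existing answer: {existing_answer}\n"
    "We have the opportunity to refine the existing answer "
    "(only if needed) with some more context below.\n"
    "------------\n"
    "{context_msg}\n"
    "------------\n"
    "Given the new context, refine the original answer to better "
    "answer the question. If the context isn't useful, return the "
    "original answer."
)
```

If the existing answer is itself a non-answer and the new chunk alone cannot answer the question, the model has nothing useful to refine, so the non-answer can propagate all the way through the last chunk.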
There is not much we can really do about this. A good fine-tuning dataset might help some with that problem, but it is unknown how much training data would be needed to compensate. Net net, I think this is the current best method, despite its drawbacks.