The community member has several TXT files they want to query using GPT, and they want the answers to be in long form. The comments suggest that to achieve this, the community member should adjust the max_tokens and num_output settings, either when creating the index or when querying the data. The comments indicate that these settings are part of the "service context object" and should be passed in when loading the data from disk.
Hi, I have several TXT files that I want to query, and I want GPT to answer in long form. My question is: when I'm building the index, should I change the output size so that the answers GPT gives are long form?
If your answers aren't already being truncated, then the model is deciding to give short answers on its own. You might also have to tell it in the prompt to go more in depth (in addition to changing max_tokens and num_output).
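A minimal sketch of what that could look like, assuming the legacy LlamaIndex (GPT Index) API where indexes are saved to and loaded from disk — the filename "index.json", the model name, and the 512-token values are placeholder choices, not recommendations:

```python
from llama_index import GPTSimpleVectorIndex, LLMPredictor, ServiceContext
from langchain.llms import OpenAI

# max_tokens raises the cap on how many tokens the LLM may generate;
# num_output tells LlamaIndex how much room to reserve for the answer
# when packing context into the prompt.
llm_predictor = LLMPredictor(
    llm=OpenAI(temperature=0, model_name="text-davinci-003", max_tokens=512)
)
service_context = ServiceContext.from_defaults(
    llm_predictor=llm_predictor,
    num_output=512,
)

# Pass the service context when loading the index from disk so that
# queries against it use the larger output settings.
index = GPTSimpleVectorIndex.load_from_disk(
    "index.json", service_context=service_context
)
response = index.query("Summarize the documents in detail.")
print(response)
```

Even with these settings raised, the model may still answer briefly unless the query itself asks for a detailed, long-form response.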