
Updated 4 months ago

How do I make index.query() return a longer answer?

At a glance
Hi everyone, how do I make index.query() return a longer answer? I've tried turning on verbose=True but the answers are still very short. I have loaded 10+ documents and want it to return longer answers.
11 comments
Look into increasing num_output, https://gpt-index.readthedocs.io/en/latest/how_to/custom_llms.html

verbose arg is deprecated now, and it only controlled logging, not query results πŸ™‚
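For later readers, a plain-data sketch of the two settings that interact here (the names mirror the old gpt-index / langchain parameters this thread uses; the values are illustrative assumptions, not defaults):

```python
# Sketch: num_output on the PromptHelper only *reserves* room in the prompt;
# the LLM's own max_tokens is what actually caps the answer length.
# These dicts mirror old gpt-index / langchain parameter names.
llm_settings = {
    "model_name": "text-davinci-003",
    "max_tokens": 512,       # raise this to get longer completions
}
prompt_helper_settings = {
    "max_input_size": 4096,  # context window of text-davinci-003
    "num_output": 512,       # reserves room in the prompt for the answer
    "max_chunk_overlap": 20,
}

# Keeping the two equal avoids reserving prompt space the model never uses,
# or truncating an answer the prompt had room for.
assert llm_settings["max_tokens"] == prompt_helper_settings["num_output"]
```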
I did try with prompt_helper, setting num_output to 10000, but now it gives an error saying my max_chunk_overlap of 20 is too big when the chunk size is -4756 =/
That's an unrealistic output size for any current model
I tried num_output = 500 and max_chunk_overlap of 5, but I still get the same error?
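The negative chunk size reported above is just budget arithmetic: the helper subtracts num_output (plus some prompt overhead) from the model's context window before splitting documents. A rough sketch, with the overhead term as an assumption:

```python
MAX_INPUT_SIZE = 4096  # text-davinci-003 context window

def available_chunk_size(num_output: int, prompt_overhead: int = 0) -> int:
    """Tokens left for each document chunk after reserving space for the
    answer (num_output) and the prompt template (prompt_overhead)."""
    return MAX_INPUT_SIZE - num_output - prompt_overhead

print(available_chunk_size(10000))  # negative: no room left for any chunk
print(available_chunk_size(500))    # positive: chunks can fit
```

With num_output=10000 the budget goes negative no matter what max_chunk_overlap is, which is why the error looks like an overlap problem but is really an output-size problem.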
It's hard to say what's wrong without seeing the code to construct the index
llm_predictor = LLMPredictor(llm=OpenAI(openai_api_key=st.session_state["OPENAI_API_KEY"], temperature=0, model_name="text-davinci-003"))
prompt_helper = PromptHelper(max_input_size=4096, num_output=2000, max_chunk_overlap=0)
index = doc_search(question, files, llm_predictor)
try:
    with st.spinner("Working hard to get you a good answer..."):
        st.markdown(index.query(question, llm_predictor=llm_predictor, prompt_helper=prompt_helper))
except Exception as oops:
    st.error("GPT Server Error: " + str(oops) + " Please try again.")
Let me reboot everything and see if that helps lol. Thanks for trying to help, really appreciate it! 😄
This is the error I get:
[attachment: image.png]
And what happens in doc_search? Did it work with defaults? Almost sounds like the index is not getting properly formed
Yes, it works without prompt_helper.
What is the default num_output? 256?
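From memory of that era's defaults (worth verifying against your installed version): both gpt-index's reserved answer budget and langchain's OpenAI max_tokens defaulted to 256, and the answer length is capped by the smaller of the two, which is why raising num_output alone doesn't lengthen answers:

```python
# Assumed defaults for the gpt-index era discussed above (verify locally):
DEFAULT_NUM_OUTPUT = 256   # PromptHelper's reserved answer budget (assumed)
DEFAULT_MAX_TOKENS = 256   # langchain OpenAI wrapper's completion cap (assumed)

def effective_answer_cap(num_output: int, max_tokens: int) -> int:
    # The prompt reserves num_output tokens, but the model stops generating
    # at max_tokens, so the shorter of the two wins.
    return min(num_output, max_tokens)

# Setting num_output=2000 while leaving max_tokens at its default
# still yields ~256-token answers.
print(effective_answer_cap(2000, DEFAULT_MAX_TOKENS))  # 256
```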