
Error

At a glance

A community member hit a "list index out of range" error when querying the Document Summary Index with llama_index 0.6.8. They shared a full traceback and additional context: the index is built from roughly 10-50 documents and usually works, but the error occurs occasionally. Other community members suggested the cause may be either no relevant nodes or badly formatted LLM output. The original poster is considering catching the error and telling the user that the language model cannot answer the question.

Hi, sometimes when I query the Document Summary Index, I get the error "list index out of range". Is this a bug in llama_index? I'm using llama_index 0.6.8.
9 comments
Do you have a full traceback example?
Let me try to reproduce tmr.
@Logan M Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.9/site-packages/flask/app.py", line 2077, in wsgi_app
response = self.full_dispatch_request()
File "/opt/anaconda3/lib/python3.9/site-packages/flask/app.py", line 1525, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/opt/anaconda3/lib/python3.9/site-packages/flask/app.py", line 1523, in full_dispatch_request
rv = self.dispatch_request()
File "/opt/anaconda3/lib/python3.9/site-packages/flask/app.py", line 1509, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "/Documents/GitHub/chat3pt-python/app.py", line 376, in query_model_product
results = query_product(openai_api_key, app.config["MODEL_BUCKET"], model_json_url, history_messages, query_str, detected_language, chunk_size_limit)
File "/Documents/GitHub/chat3pt-python/query_product.py", line 98, in query_product
response = query_engine.query(query)
File "/opt/anaconda3/lib/python3.9/site-packages/llama_index/indices/query/base.py", line 20, in query
return self._query(str_or_query_bundle)
File "/opt/anaconda3/lib/python3.9/site-packages/llama_index/query_engine/retriever_query_engine.py", line 139, in _query
nodes = self._retriever.retrieve(query_bundle)
File "/opt/anaconda3/lib/python3.9/site-packages/llama_index/indices/base_retriever.py", line 21, in retrieve
return self._retrieve(str_or_query_bundle)
File "/opt/anaconda3/lib/python3.9/site-packages/llama_index/indices/document_summary/retrievers.py", line 81, in _retrieve
raw_choices, relevances = self._parse_choice_select_answer_fn(
File "/opt/anaconda3/lib/python3.9/site-packages/llama_index/indices/utils.py", line 100, in default_parse_choice_select_answer_fn
answer_num = int(line_tokens[0].split(":")[1].strip())
IndexError: list index out of range
Thank you for your support
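The traceback ends inside llama_index's default choice-select parser, at `int(line_tokens[0].split(":")[1].strip())`. A minimal sketch of why that expression can raise IndexError (the LLM reply strings below are hypothetical examples, not actual model output):

```python
def parse_first_choice(line: str) -> int:
    """Mimics the failing expression from default_parse_choice_select_answer_fn."""
    line_tokens = line.split(",")
    # If the first token contains no ":", split(":") returns a one-element
    # list, so index [1] raises IndexError -- exactly the error in the traceback.
    return int(line_tokens[0].split(":")[1].strip())

# A well-formed choice-select line parses fine:
print(parse_first_choice("Doc: 2, Relevance: 8"))  # -> 2

# But a free-form refusal from the LLM does not:
try:
    parse_first_choice("None of the documents are relevant.")
except IndexError as exc:
    print(f"IndexError: {exc}")
```

So any LLM response line that lacks the expected `Doc: <number>` prefix is enough to trigger the crash.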
@Logan M More background information: I have around 10-50 documents, and they are used to construct the doc summary index as shown below. Sometimes the index works well and the LLM is able to answer my question based on the documents, but the above error occurs occasionally.
llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo", max_tokens=1000)
llm_predictor = LLMPredictor(llm=llm)
service_context = ServiceContext.from_defaults(
    llm_predictor=llm_predictor,
    chunk_size_limit=chunk_size_limit,
    callback_manager=callback_manager,
)
response_synthesizer = ResponseSynthesizer.from_args(
    response_mode="tree_summarize",
    use_async=True,
    service_context=service_context,
)
SUMMARY_QUERY = (
    "Give a concise summary of the document in a paragraph.\n"
    "If the document is a JSON of a product, the summary must include the 'Id' key.\n"
)
index = GPTDocumentSummaryIndex.from_documents(
    documents,
    summary_query=SUMMARY_QUERY,
    service_context=service_context,
    response_synthesizer=response_synthesizer,
)
Is it because there are no relevant nodes to choose?
The above error is likely because of either no relevant nodes, or the output was not formatted correctly πŸ€”
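If the cause is malformed output, one workaround is a more tolerant parser that skips unparseable lines instead of crashing. This is only a sketch: it assumes the retriever accepts a `parse_choice_select_answer_fn` argument (the `_parse_choice_select_answer_fn` attribute in the traceback suggests such a hook exists), and that well-formed lines look like `Doc: <n>, Relevance: <score>`.

```python
import re
from typing import List, Tuple


def safe_parse_choice_select_answer_fn(
    answer: str, num_choices: int
) -> Tuple[List[int], List[float]]:
    """Tolerant variant of the default choice-select parser.

    Lines that don't contain a document number and a relevance score
    (e.g. "None of the documents are relevant.") are skipped rather
    than raising IndexError.
    """
    answer_nums: List[int] = []
    answer_relevances: List[float] = []
    for line in answer.splitlines():
        # Expect something like "Doc: 3, Relevance: 7".
        match = re.search(r"Doc\s*:?\s*(\d+).*?(\d+(?:\.\d+)?)", line)
        if match is None:
            continue  # free-form text, refusals, blank lines, etc.
        answer_num = int(match.group(1))
        if 1 <= answer_num <= num_choices:
            answer_nums.append(answer_num)
            answer_relevances.append(float(match.group(2)))
    return answer_nums, answer_relevances
```

With a hook like this, an LLM reply that mixes valid choices and free-form text would yield only the valid choices instead of an exception.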
If that is the case, I guess I can only catch the error and tell the user the LLM cannot answer the question? Anyway, I appreciate the help!
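Catching the error and returning a fallback message could look like the sketch below, wrapping the `query_engine.query(...)` call from the traceback (the engine and fallback text here are placeholders):

```python
def answer_or_fallback(query_engine, query: str) -> str:
    """Query the engine, degrading gracefully when retrieval fails."""
    try:
        return str(query_engine.query(query))
    except IndexError:
        # Raised when the choice-select LLM output can't be parsed,
        # e.g. when no relevant documents were selected.
        return "Sorry, I couldn't find an answer in the documents."
```

This keeps the Flask endpoint from returning a 500 while the underlying parsing issue is investigated.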