That appears to be the issue. It looks like `LLMRerank` parses the raw LLM output here:
```python
raw_choices, relevances = self._parse_choice_select_answer_fn(
    raw_response, len(nodes_batch)
)
```
But in my case, here's the raw response:

```
There is no information provided in the summaries of the documents about the author moving to England. All references to locations pertain to places within the United States or Italy (specifically Florence). Therefore, there are no relevant documents to answer the question about why the author moved to England.
```
This happens when using the `DEFAULT_CHOICE_SELECT_PROMPT`.
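For context, the default prompt's built-in example asks the model to answer in a structured format along these lines (which is what the default parse function expects):

```
Doc: 9, Relevance: 7
Doc: 3, Relevance: 4
```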
So it seems `LLMRerank` needs better error handling for the case where the LLM doesn't follow the formatting instructions (here, because it found nothing relevant and answered in free-form prose instead). A possible workaround is sketched below.
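As a workaround until that's fixed: `LLMRerank` accepts a custom `parse_choice_select_answer_fn`, so you can wrap the default parser and treat malformed output as "no relevant documents" instead of crashing. A minimal sketch, with the caveats that the helper name `safe_parse_choice_select_answer_fn` is mine and that import paths vary by llama_index version (these assume the `llama_index.core` layout):

```python
from typing import List, Tuple

from llama_index.core.indices.utils import default_parse_choice_select_answer_fn
from llama_index.core.postprocessor import LLMRerank


def safe_parse_choice_select_answer_fn(
    answer: str, num_choices: int
) -> Tuple[List[int], List[float]]:
    """Parse the choice-select answer; treat malformed output as no choices."""
    try:
        return default_parse_choice_select_answer_fn(answer, num_choices)
    except (ValueError, IndexError):
        # The LLM answered in free-form prose (e.g. "There is no information
        # provided...") instead of "Doc: N, Relevance: M" lines, so return
        # empty choice/relevance lists rather than raising.
        return [], []


reranker = LLMRerank(
    top_n=3,
    parse_choice_select_answer_fn=safe_parse_choice_select_answer_fn,
)
```

Returning empty choice/relevance lists should make the reranker keep no nodes from that batch, which matches the LLM's own conclusion that nothing was relevant.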
@jerryjliu0 fyi