Updated last year

At a glance

The post asks if anyone has encountered an error when using a reranker. The comments suggest that the language model did not follow the instructions, which prevented llama-index from parsing the output. A community member provides more details: the raw response from the language model says it found no relevant information to answer the question about the author moving to England. The comments suggest that LLMRerank needs better error handling for this case, and that RankGPT or a normal sentence transformer reranker might be a better choice. There is no explicitly marked answer in the comments.

Anyone hit this error when using a reranker?
4 comments
Seems like the LLM did not follow instructions, and llama-index couldn't parse the output
That appears to be the issue. Looks like LLMRerank parses the output here:
Plain Text
            raw_choices, relevances = self._parse_choice_select_answer_fn(
                raw_response, len(nodes_batch)
            )

But in my case here's what the raw response is:
raw_response: There is no information provided in the summaries of the documents about the author moving to England. All references to locations pertain to places within the United States or Italy (specifically Florence). Therefore, there are no relevant documents to answer the question about why the author moved to England.
This is when using the DEFAULT_CHOICE_SELECT_PROMPT

So it seems like LLMReranker needs better error handling in the case where the LLM doesn't follow the instructions (in this case because it didn't find anything relevant)
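As a rough sketch of what that error handling could look like: `parse_choice_select_answer` below is a hypothetical defensive variant of the parse step, not the library's actual implementation. It assumes the default prompt asks the LLM for lines like `Doc: 3, Relevance: 8`, and it returns empty lists when the response is prose instead of raising, so a caller could fall back to returning the nodes unranked.

```python
import re


def parse_choice_select_answer(raw_response: str, num_choices: int):
    """Hypothetical defensive parser for LLM rerank output.

    Assumes the prompt requested lines like 'Doc: 3, Relevance: 8'.
    A prose answer (e.g. 'There is no relevant information...') yields
    empty lists instead of a parse error.
    """
    choices: list[int] = []
    relevances: list[float] = []
    for line in raw_response.splitlines():
        match = re.match(r"^\s*Doc:\s*(\d+)\s*,?\s*Relevance:\s*(\d+)", line)
        if match:
            choice = int(match.group(1))
            # Ignore out-of-range document numbers the LLM may invent.
            if 1 <= choice <= num_choices:
                choices.append(choice)
                relevances.append(float(match.group(2)))
    return choices, relevances
```

With this shape, the prose response quoted above would parse to `([], [])` rather than throwing, and LLMRerank could treat that as "no relevant documents".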
@jerryjliu0 fyi
If you want to make a PR for this, it would be appreciated!
In general though, either RankGPT or a normal sentence transformer reranker is probably a better choice here
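A minimal sketch of why the sentence transformer route avoids this failure mode: a cross-encoder reranker scores each query/document pair numerically and sorts, so there is no free-form LLM output to parse. `overlap_score` below is a toy stand-in for the model's scoring call (in practice you would use a cross-encoder from the sentence-transformers library, or llama-index's sentence transformer rerank postprocessor).

```python
def overlap_score(query: str, doc: str) -> float:
    """Toy relevance score: count of shared lowercase tokens.

    Stand-in for a real cross-encoder model's predict() call.
    """
    return float(len(set(query.lower().split()) & set(doc.lower().split())))


def rerank(query: str, docs: list[str], score_fn, top_n: int = 2) -> list[str]:
    """Score every (query, doc) pair and keep the top_n highest scorers.

    Because the score is a number, there is no instruction-following
    or output parsing that can fail.
    """
    return sorted(docs, key=lambda d: score_fn(query, d), reverse=True)[:top_n]
```

Usage: `rerank("author moved to England", docs, overlap_score)` returns the two documents with the most token overlap; swapping in a real model only changes `score_fn`.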