LLM Re-rankers Performing Better Than Cross-Encoders
At a glance
The community member who posted the original question asked whether large language model (LLM) re-rankers perform better than cross-encoders, setting aside token cost. In the comments, another community member replied that LLM re-rankers usually do perform better, but cautioned that they can fail by producing unparsable output and are much slower. There is no explicitly marked answer to the original question.
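To illustrate the parse-failure mode mentioned in the comments, here is a minimal sketch of a defensive LLM re-ranker. Everything in it is an assumption for illustration, not from the thread: `call_llm` is a hypothetical callable (prompt in, text out), and the scoring prompt and regex are one possible format. The point is the fallback: if the model's output cannot be parsed into a score per passage, the code returns the retriever's original order instead of a corrupted ranking.

```python
import re

def llm_rerank(query, docs, call_llm):
    """Re-rank docs with an LLM, falling back to the original order
    when the model's output cannot be parsed.

    call_llm is a hypothetical callable (prompt -> str); swap in
    your own client.
    """
    prompt = (
        "Rate each passage's relevance to the query on a 0-10 scale.\n"
        f"Query: {query}\n"
        + "\n".join(f"[{i}] {d}" for i, d in enumerate(docs))
        + "\nAnswer with one line per passage: '<index>: <score>'."
    )
    raw = call_llm(prompt)

    # Try to pull an "index: score" pair out of each output line.
    scores = {}
    for line in raw.splitlines():
        m = re.match(r"\s*\[?(\d+)\]?\s*:\s*(\d+(?:\.\d+)?)", line)
        if m:
            scores[int(m.group(1))] = float(m.group(2))

    # The failure mode raised in the discussion: unparsable output.
    # Rather than rank on partial scores, keep the original order.
    if len(scores) != len(docs):
        return docs

    order = sorted(range(len(docs)), key=lambda i: -scores[i])
    return [docs[i] for i in order]
```

A cross-encoder, by contrast, returns numeric scores directly, so this parsing step (and its failure mode) does not arise; the trade-off the commenter describes is quality versus this fragility and latency.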