
LLM Re-rankers Performing Better Than Cross Encoders

At a glance

The community member who posted the original question asked whether, token cost aside, large language model (LLM) re-rankers have been performing better than cross encoders. In the comments, another community member responded that LLMs usually perform better, but noted that LLM re-rankers can also fail by producing unparsable output and are much slower. There is no explicitly marked answer to the original question.

@Logan M aside from token cost... have you noticed LLM re-rankers performing better than cross encoders?
1 comment
eh yea, usually LLMs will be better (although this also depends on the LLM you are using lol)

LLM re-rankers can also fail, though (producing output that can't be parsed), and they are much slower
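To illustrate the parsing failure mode mentioned above, here is a minimal, hypothetical sketch of how a caller might defend against unparsable re-ranker output. The `"Doc: <id>, Relevance: <score>"` line format and the `parse_rerank_scores` helper are assumptions for illustration, not the exact format any particular library uses; the key idea is falling back to the retriever's original order when the LLM's output can't be parsed.

```python
import re

def parse_rerank_scores(llm_output: str, num_docs: int) -> list[int]:
    """Parse hypothetical 'Doc: <id>, Relevance: <score>' lines from an
    LLM re-ranker and return document indices, best first.

    If the output is unparsable (a common LLM re-ranker failure mode),
    fall back to the retriever's original order instead of crashing.
    """
    matches = re.findall(r"Doc:\s*(\d+),\s*Relevance:\s*(\d+)", llm_output)
    # Sort parsed doc ids by descending relevance score, dropping out-of-range ids.
    ranked = [int(doc) for doc, score in sorted(matches, key=lambda m: -int(m[1]))
              if int(doc) < num_docs]
    if not ranked:
        # Unparsable output: keep the original retrieval order.
        return list(range(num_docs))
    # Append any documents the LLM silently dropped, preserving their order.
    ranked += [i for i in range(num_docs) if i not in ranked]
    return ranked

# Well-formed output is re-ordered by relevance; garbage keeps the original order.
print(parse_rerank_scores("Doc: 1, Relevance: 9\nDoc: 0, Relevance: 3", 3))
print(parse_rerank_scores("I cannot rank these documents.", 2))
```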