At a glance
The community member is trying to rerank nodes using a Large Language Model (LLM) but is encountering an issue with the LLMRerank postprocessor. The issue seems to be related to the output of the LLM not being parsed correctly. The community members discuss two options to address this: 1) providing a custom prompt and parsing function, or 2) writing a custom postprocessor to have more control over the reranking process. A guide on custom node postprocessors is provided as a resource.
Hello !
I'm trying to rerank my nodes using LLM
Plain Text
    def process_retriever_component_fn(self, user_query: str):
        """Transform the output of the sentence retriever."""

        logger.info("Sentence Retriever Output processing...")
        sentence_retriever = self.index.as_retriever(similarity_top_k=5)

        nodes = sentence_retriever.retrieve(user_query)

        # Debug dump of the raw retrieved nodes
        with open("first_nodes.txt", mode="w") as f:
            for node in nodes:
                f.write(str(node) + "\n")

        # Create a QueryBundle from the user query
        query_bundle = QueryBundle(query_str=user_query)

        logger.info("Relevant nodes retrieved...")
        logger.info("Starting the reranking process...")
        postprocessor = LLMRerank(top_n=3, llm=self.llm, choice_batch_size=1)
        reranked_nodes = postprocessor.postprocess_nodes(
            nodes=nodes, query_bundle=query_bundle
        )

        # Debug dump of the reranked nodes, concatenated into the context string
        contexts = ""
        with open("second_nodes.txt", mode="w") as f:
            for reranked_node in reranked_nodes:
                f.write(str(reranked_node) + "\n")
                contexts += str(reranked_node) + "\n"
        return contexts

I'm running into the following issue. For indexing I'm using SentenceSplitter, and the LLM is Llama3:
Plain Text
File "/usr/local/lib/python3.8/dist-packages/llama_index/core/postprocessor/types.py", line 56, in postprocess_nodes
return self._postprocess_nodes(nodes, query_bundle)
File "/usr/local/lib/python3.8/dist-packages/llama_index/core/instrumentation/dispatcher.py", line 230, in wrapper
result = func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/llama_index/core/postprocessor/llm_rerank.py", line 99, in _postprocess_nodes
raw_choices, relevances = self._parse_choice_select_answer_fn(
File "/usr/local/lib/python3.8/dist-packages/llama_index/core/indices/utils.py", line 104, in default_parse_choice_select_answer_fn
answer_num = int(line_tokens[0].split(":")[1].strip())
IndexError: list index out of range
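The IndexError comes from the default answer parser, which expects the LLM to reply with strict lines like `Doc: 1, Relevance: 7`; when Llama3 adds any chatter around them, `split(":")` finds nothing to index. One way around it is a more forgiving parser that just scans the output for the doc/relevance pairs. This is only a sketch of such a parser (the regex and the exact expected line format are assumptions, and you should verify against your llama_index version that `LLMRerank` accepts a `parse_choice_select_answer_fn` argument):

```python
import re


def lenient_parse_choice_select_answer_fn(answer: str, num_choices: int):
    """Forgiving replacement for the default choice-select parser.

    Scans the raw LLM output for "Doc: <n>, Relevance: <score>" pairs
    and ignores surrounding chatter instead of raising IndexError.
    """
    choices, relevances = [], []
    pattern = r"[Dd]oc(?:ument)?\s*:?\s*(\d+)\s*,?\s*[Rr]elevance\s*:?\s*(\d+)"
    for match in re.finditer(pattern, answer):
        num = int(match.group(1))
        # Drop hallucinated doc numbers outside the candidate range
        if 1 <= num <= num_choices:
            choices.append(num)
            relevances.append(float(match.group(2)))
    return choices, relevances
```

If the constructor supports it, you would pass this in when building the reranker, e.g. `LLMRerank(top_n=3, llm=self.llm, parse_choice_select_answer_fn=lenient_parse_choice_select_answer_fn)`.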
Seems like Llama3 did not follow the prompt, and the code struggled to parse the output.
So I should give it a custom prompt and a custom parsing function?
You can customize that, yeah. Or you could write your own postprocessor to rerank (it's pretty straightforward too).
Alright, thanks a lot 🙂, I'll go with the second option to have more control over it.
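For the record, the custom-postprocessor route mostly amounts to scoring each node yourself and keeping the top few. Here is a minimal, library-free sketch of that core loop; `score_fn` is a hypothetical placeholder for whatever LLM call you make (e.g. something like `self.llm.complete(...)`), and in llama_index this logic would live inside the `_postprocess_nodes` method of a `BaseNodePostprocessor` subclass:

```python
import re


def rerank_nodes(nodes, query, score_fn, top_n=3):
    """Score each node against the query and return the top_n best.

    score_fn(query, node) is expected to return text containing a
    numeric relevance score, e.g. an LLM completion like "Relevance: 8".
    """
    scored = []
    for node in nodes:
        raw = score_fn(query, node)
        # Tolerate chatty output: grab the first number we can find
        match = re.search(r"\d+(?:\.\d+)?", str(raw))
        score = float(match.group()) if match else 0.0
        scored.append((score, node))
    # Highest score first, keep the top_n nodes
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [node for _, node in scored[:top_n]]
```

The upside of owning this loop is exactly the control discussed above: you choose the prompt, the score format, and the failure behavior, so a non-conforming LLM reply degrades to a score of 0 instead of crashing the pipeline.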