Hi, I'm using `RouterRetriever` with `LLMSingleSelector`. The problem is that the selector prompt produced by `SelectionOutputParser` isn't in the format that I want. I have a prompt template like this:

Plain Text
SINGLE_SELECTOR_PROMPT_TEMPLATE = (
    "### System:\n"
    "You are a helpful assistant. "
    "Using only the choices above and not prior knowledge, return "
    "the choice that is most relevant to the question: '{query_str}'\n"
    "### User:\n"
    "Some choices are given below. It is provided in a numbered list "
    "(1 to {num_choices}), "
    "where each item in the list corresponds to a summary.\n"
    "---------------------\n"
    "{context_list}"
    "\n---------------------\n"
    "### Assistant:\n"
)
...

retriever = RouterRetriever(
    selector=LLMSingleSelector.from_defaults(
        prompt_template_str=SINGLE_SELECTOR_PROMPT_TEMPLATE
    ),
    retriever_tools=[vs_tool, summary_tool],
)


However, `SelectionOutputParser.format` automatically appends a FORMAT_STR, i.e. `prompt_template + "\n\n" + _escape_curly_braces(FORMAT_STR)`, which results in:

Plain Text
### System:
You are a helpful assistant. Using only the choices above and not prior knowledge, return the choice that is most relevant to the question: 'Summarize the uploaded document'
### User:
Some choices are given below. It is provided in a numbered list (1 to 2), where each item in the list corresponds to a summary.
---------------------
(1) Useful for retrieving specific context from uploaded documents.

(2) Useful to retrieve all context from uploaded documents and summary tasks. Don't use if the question only requires more specific context.
---------------------
### Assistant:  ====> This phrase is in the wrong position.

The output should be ONLY JSON formatted as a JSON instance.

Here is an example:
[
    {{
        choice: 1,
        reason: "<insert reason for choice>"
    }},
    ...
]

How can I move the `### Assistant:` marker to right after the FORMAT_STR?
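One possible approach (a sketch, not an answer from the thread): subclass `SelectionOutputParser` and override `format`, letting the base class append FORMAT_STR first and then moving the `### Assistant:` marker to the end. The string helper below is self-contained; the commented subclass lines assume `SelectionOutputParser` is importable in your llama_index version and that `LLMSingleSelector.from_defaults` accepts an `output_parser` argument, so verify both against your installed version.

```python
# Sketch: relocate the "### Assistant:" marker so it follows the JSON
# format instructions that SelectionOutputParser appends.
ASSISTANT_MARKER = "### Assistant:\n"

def move_assistant_marker_to_end(formatted_prompt: str) -> str:
    """Remove the marker from wherever the template placed it and
    re-append it after everything else (i.e. after FORMAT_STR)."""
    without_marker = formatted_prompt.replace(ASSISTANT_MARKER, "")
    return without_marker.rstrip("\n") + "\n\n" + ASSISTANT_MARKER

# Hypothetical subclass -- adjust the import path to your version:
# from llama_index.output_parsers.selection import SelectionOutputParser
#
# class AlpacaSelectionOutputParser(SelectionOutputParser):
#     def format(self, prompt_template: str) -> str:
#         # Base class appends FORMAT_STR; then move the marker last.
#         return move_assistant_marker_to_end(super().format(prompt_template))
#
# selector = LLMSingleSelector.from_defaults(
#     prompt_template_str=SINGLE_SELECTOR_PROMPT_TEMPLATE,
#     output_parser=AlpacaSelectionOutputParser(),
# )
```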
14 comments
This is happening because the SelectionOutputParser is appending some extra instructions

Probably, you can get a format at least similar to what you want using only LLM settings... what LLM are you using?
It uses the Alpaca format
I tried asking the same prompt to output the JSON of choices; it works well when `### Assistant:` is in the correct position.
Is this with LlamaCPP? Something else?
Yes, it’s llamacpp
I tried using only LLM settings, but it doesn't affect the selector prompt 🤔

Plain Text
llm = LlamaCPP(..., messages_to_prompt=messages_to_prompt_alpaca)
the final response is in the correct Alpaca format, but the selector prompt isn't
no prefixes are added

Plain Text
Some choices are given below. It is provided in a numbered list (1 to 3), where each item in the list corresponds to a summary.
---------------------
(1) Useful for retrieving specific context from uploaded documents.

(2) Useful to retrieve all context from uploaded documents and summary tasks. Don't use if the question only requires more specific context.

(3) Useful for questions not related to uploaded documents.
---------------------
Using only the choices above and not prior knowledge, return the choice that is most relevant to the question: 'Put your previous answer in bullet points'


The output should be ONLY JSON formatted as a JSON instance.

Here is an example:
[
    {{
        choice: 1,
        reason: "<insert reason for choice>"
    }},
    ...
]
Hmm, try passing the service context to the selector? It might not be using your LLM?
You might also have to set completion_to_prompt
In addition to messages_to_prompt
I set the global service context, so I don't think that's the case
I'll try adding completion_to_prompt
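Following the suggestion above, here's a minimal sketch of Alpaca-style `messages_to_prompt` and `completion_to_prompt` callables (the helper names are hypothetical; only the Alpaca role markers come from the thread). The point of setting both, presumably, is that the selector sends a plain completion-style prompt, which `messages_to_prompt` alone would never touch:

```python
def messages_to_prompt_alpaca(messages) -> str:
    """Render chat messages (objects with .role / .content, e.g. ChatMessage)
    into Alpaca-style role blocks, ending with an assistant header."""
    headers = {
        "system": "### System:",
        "user": "### User:",
        "assistant": "### Assistant:",
    }
    parts = [f"{headers[str(m.role)]}\n{m.content}" for m in messages]
    return "\n".join(parts) + "\n### Assistant:\n"

def completion_to_prompt_alpaca(completion: str) -> str:
    """Wrap a bare completion-style prompt (what the selector sends) in
    Alpaca markers, so the model still sees '### Assistant:' last."""
    return f"### User:\n{completion}\n### Assistant:\n"

# Pass both to LlamaCPP (sketch; other constructor args omitted):
# llm = LlamaCPP(
#     ...,
#     messages_to_prompt=messages_to_prompt_alpaca,
#     completion_to_prompt=completion_to_prompt_alpaca,
# )
```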