Refining answers

When creating an index and asking a question, I'm getting answers as if the question has already been asked.
Is the refine process causing some strange output results?

Example:
I have an index pertaining to some company information, but when I ask, "How can I get in contact?", the response is along the lines of "The original answer is already comprehensive and provides multiple ways for users to contact us. However, if you have any queries or concerns about...."
This is a common problem I've seen with ChatGPT.

I think the prompt template used for refining needs to be changed tbh

A quick thing to try is using the ChatGPTLLMPredictor class, and adding a system message with more explicit instructions

See this screenshot as an example:
You could add a system message saying something like "when presented with new context and a previous answer, repeat the previous answer if the new context is not relevant"
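For reference, since the screenshot isn't preserved: a rough sketch of the idea. The prepend_messages argument on ChatGPTLLMPredictor is an assumption from the llama-index 0.5.x-era API, so check it against your installed version.
Plain Text
from langchain.schema import SystemMessage
from llama_index import ChatGPTLLMPredictor, ServiceContext

# Sketch only: steer the refine step with a prepended system message
# (prepend_messages is assumed; verify against your version).
llm_predictor = ChatGPTLLMPredictor(
    prepend_messages=[
        SystemMessage(
            content=(
                "When presented with new context and a previous answer, "
                "repeat the previous answer if the new context is not relevant."
            )
        )
    ]
)
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)
response = index.query(query_text, service_context=service_context)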
@JW you might be interested in this as well
@Logan M - Kind of an odd one.... I attempted what you suggested above. Now I'm getting complete nonsense:
[Attachment: image.png]
This is being queried on an index pertaining to product information... of which none of the details are present in this mess.
Oof OK, so don't do that then 🀣
The other thing to try is a little more complex, so I was hoping that would work
The default refine prompt is in here: CHAT_REFINE_PROMPT

You can follow the same process and create your own refine prompt, maybe with slightly different instructions, and pass it in during the query

index.query(..., refine_template=my_refine_template)
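As a sketch of that (module paths follow the llama-index 0.5.x-era layout, so adjust to your version), a custom refine template can mirror the default CHAT_REFINE_PROMPT with stricter wording:
Plain Text
from langchain.prompts.chat import (
    AIMessagePromptTemplate,
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
)
from llama_index.prompts.prompts import RefinePrompt

# Same variables as the default ({query_str}, {existing_answer}, {context_msg}),
# but with an explicit instruction to leave irrelevant context alone.
refine_msgs = [
    HumanMessagePromptTemplate.from_template("{query_str}"),
    AIMessagePromptTemplate.from_template("{existing_answer}"),
    HumanMessagePromptTemplate.from_template(
        "We have the opportunity to refine the above answer "
        "with some more context below.\n"
        "------------\n"
        "{context_msg}\n"
        "------------\n"
        "If the context is useful, refine the original answer. "
        "Otherwise, repeat the original answer exactly, and never "
        "mention the context."
    ),
]
my_refine_template = RefinePrompt.from_langchain_prompt(
    ChatPromptTemplate.from_messages(refine_msgs)
)

index.query(query_text, refine_template=my_refine_template)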
Oh, awesome, I'll take a look. Thanks!
Plain Text
# Assumed imports (0.5.x-era paths; adjust to your version):
from langchain.memory import ConversationBufferMemory
from llama_index.langchain_helpers.agents import create_llama_chat_agent

memory = ConversationBufferMemory(memory_key="chat_history")
agent_chain = create_llama_chat_agent(
    toolkit,
    llm,
    memory=memory,
    verbose=True
)

Hi, I'm using the chat agent. How can I pass the refine_template to it?
When you create the toolkit, there is an option to pass in query kwargs in the query configs

Something like this

Plain Text
query_configs = [
    {
        ...
        "query_kwargs": {
            ...
            "refine_template": my_refine_template
        },
        ...
    },
    ...
]
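Filled out a little (the index_struct_type and query_mode values here are illustrative, following the chatbot tutorial):
Plain Text
query_configs = [
    {
        "index_struct_type": "simple_dict",
        "query_mode": "default",
        "query_kwargs": {
            "similarity_top_k": 1,
            "refine_template": my_refine_template,
        },
    },
]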
Thanks, I'll try
Plain Text
from llama_index.langchain_helpers.agents import LlamaToolkit  # assumed import

toolkit = LlamaToolkit(
    index_configs=index_configs,
    graph_configs=graph_configs
)

The toolkit contains index_configs and graph_configs; which one should I modify to add the refine_template?
Hmmm, looking at the chatbot tutorial (https://gpt-index.readthedocs.io/en/latest/guides/tutorials/building_a_chatbot.html), it is a little confusing isn't it haha

I think the graph config makes sense. Like in this screenshot, I would add the refine template to the query configs there
[Attachment: image.png]
mmm actually, maybe both haha just to be safe
also add it to index_query_kwargs too
[Attachment: image.png]
just to be safe lol
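Since the screenshots aren't preserved, here is roughly what that looks like in the chatbot tutorial's terms (names and descriptions are illustrative); the refine template rides along inside the query_configs passed to the graph config:
Plain Text
from llama_index.langchain_helpers.agents import GraphToolConfig, IndexToolConfig

graph_config = GraphToolConfig(
    graph=graph,
    name="Graph Index",
    description="useful for questions that span multiple documents",
    query_configs=query_configs,  # refine_template sits in query_kwargs here
    tool_kwargs={"return_direct": True},
)

index_config = IndexToolConfig(
    index=index,
    name="Vector Index",
    description="useful for questions about a single document",
    index_query_kwargs={"similarity_top_k": 3},
    tool_kwargs={"return_direct": True},
)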
Great, thanks
Thanks, it works! But adding refine_template in index_query_kwargs causes an error; adding it only in query_configs works.
Now, when I ask the same question, it replies with a similar answer.
Nice! Glad it works then 😌
@JW, this seems to be the same problem I'm having: the ChatGPT LLMPredictor returns responses like "..There, the answer still stands as follows:.." and "Based on the new context, ..." (and it mentions the context even though I've added "Do not mention the context" to the prompt). How do I solve this? I'm simply doing llama_response = index.query(prompt, response_mode="compact", service_context=service_context)
@falconview_99 Sorry for the late reply. For index.query, you might need to pass refine_template=CHAT_REFINE_PROMPT explicitly:
Plain Text
from llama_index.prompts.chat_prompts import CHAT_REFINE_PROMPT  # 0.5.x-era path
index.query(query_text, similarity_top_k=3, response_mode="compact", text_qa_template=TEXT_QA_TEMPLATE, refine_template=CHAT_REFINE_PROMPT)