Updated 2 years ago

Found out that it oftentimes produces the correct JSON, but it keeps printing this as part of the response: "The new context does not provide any additional information, so the original answer remains the same." That messes it up when I try loading the response into a JSON file. Do you know how to remove this additional commentary it keeps returning?
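One workaround for the JSON-loading part is to strip the commentary before parsing. A minimal sketch (the extract_json helper is illustrative, and it assumes the response contains a single JSON object somewhere in the text):
Plain Text
import json
import re

def extract_json(response_text: str) -> dict:
    # Illustrative helper: grab the first {...} span in a noisy LLM
    # response and parse it, dropping any surrounding commentary.
    match = re.search(r"\{.*\}", response_text, re.DOTALL)
    if match is None:
        raise ValueError("No JSON object found in the response")
    return json.loads(match.group(0))

# The model's extra sentence is simply ignored:
raw = (
    '{"answer": "42"} '
    "The new context does not provide any additional information, "
    "so the original answer remains the same."
)
print(extract_json(raw))  # -> {'answer': '42'}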
17 comments
lol classic gpt-3.5 I'm guessing. Did you customize the refine prompt as well? I forget if you did
I don't think so? I'm passing in my detailed prompt and I'm using QASummaryGraphBuilder
Yes correct gpt-3.5 haha
Yea, you might want to try customizing the refine template as well

I think I have an example somewhere that helped that issue, one sec
Plain Text
from langchain.prompts.chat import (
    AIMessagePromptTemplate,
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
)

from llama_index.prompts.prompts import RefinePrompt

# Refine Prompt
CHAT_REFINE_PROMPT_TMPL_MSGS = [
    HumanMessagePromptTemplate.from_template("{query_str}"),
    AIMessagePromptTemplate.from_template("{existing_answer}"),
    HumanMessagePromptTemplate.from_template(
        "I have more context below which can be used "
        "(only if needed) to update your previous answer.\n"
        "------------\n"
        "{context_msg}\n"
        "------------\n"
        "Given the new context, update the previous answer to better "
        "answer my previous query."
        "If the previous answer remains the same, repeat it verbatim. "
        "Never reference the new context or my previous query directly.",
    ),
]


CHAT_REFINE_PROMPT_LC = ChatPromptTemplate.from_messages(CHAT_REFINE_PROMPT_TMPL_MSGS)
CHAT_REFINE_PROMPT = RefinePrompt.from_langchain_prompt(CHAT_REFINE_PROMPT_LC)
...
query_engine = index.as_query_engine(..., refine_template=CHAT_REFINE_PROMPT)
It wasn't perfect, but seemed to help. Maybe use that as an example lol
Wonder if the new guardrails might be something to help with this? Could avoid the extra lengthy prompt template.
Where would I place the refine_template if I'm using the graph with the query_configs?
I think in the query_kwargs of each query config
And I would still pass the query_str into the query, correct?

So something like this:

Plain Text
    # Set query config
    query_configs = [
        {
            "index_struct_type": "simple_dict",
            "query_mode": "default",
            "query_kwargs": {
                "similarity_top_k": 1,
                "response_mode": "compact",
                "refine_template": CHAT_REFINE_PROMPT
            },
        },
        {
            "index_struct_type": "list",
            "query_mode": "default",
            "query_kwargs": {
                "response_mode": "tree_summarize",
                "use_async": True,
                "verbose": True,
                "refine_template": CHAT_REFINE_PROMPT
            },
        },
        {
            "index_struct_type": "tree",
            "query_mode": "default",
            "query_kwargs": {
                "verbose": True,
                "refine_template": CHAT_REFINE_PROMPT
            },
        },
    ]

    try:
        print("BEFORE Querying Graph.query: ", graph)
        response = graph.query(
            query_str=query_str, 
            query_configs=query_configs, 
            service_context=service_context_chatgpt,
        )
...
Can I define the CHAT_REFINE_PROMPT_TMPL_MSGS globally or do I have to define my query_str first and then define CHAT_REFINE_PROMPT_TMPL_MSGS for each separate query?
Nah, it's global. The format variables get filled in at runtime.
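A quick sanity check of that (toy values, just to show the placeholders in the template defined above being filled at query time):
Plain Text
# The globally defined template is reused as-is; its placeholders are
# only substituted when a query actually runs.
filled = CHAT_REFINE_PROMPT_LC.format(
    query_str="What is the capital of France?",
    existing_answer="Paris.",
    context_msg="The capital of France is Paris.",
)
print(filled)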
When I try running it, it just says: "The new context does not provide any additional information that would change the previous answer. The previous answer remains the same."
Lol rip. Guess it still doesn't want to follow instructions πŸ™ƒ
So frustrating to work with gpt3.5... they've really dumbed it down
Assuming it was using the new template I suppose
Yeah I had the same impression, it was working fine a few weeks ago. I'll give guardrails a shot and see if that works