
I am having some big problems with my little proof of concept. I have a few documents on how to create a user in a custom system and how to log in to it.

When I ask "how do I create a user in systemname", I get a sensible result.

When I ask "how do I create an account in systemname", I get "The context information does not provide instructions on how to create an account in systemname".

I am not sure how to get around these sorts of issues. Any tips?
You are using GPT-3.5, right? It's had some issues lately with the process of answer refinement.

Basically, if all the text retrieved to answer a question does not fit into a single LLM call, llama_index will refine an answer over several calls.
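
Roughly, the refine loop works like this (a minimal sketch for illustration, not the actual llama_index internals):

Plain Text
# Illustrative sketch of the create-and-refine loop -- not the real
# llama_index implementation, just the shape of the process
def create_and_refine(query, text_chunks, llm):
    # First call: answer the query from the first chunk of context
    answer = llm(f"Context: {text_chunks[0]}\nQuery: {query}\nAnswer:")
    # Follow-up calls: ask the model to refine the existing answer
    # against each remaining chunk of context
    for chunk in text_chunks[1:]:
        answer = llm(
            f"Query: {query}\n"
            f"Existing answer: {answer}\n"
            f"New context: {chunk}\n"
            "Refine the existing answer using the new context (only if needed):"
        )
    return answer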

I've been experimenting with improving this (since OpenAI seems to have downgraded gpt-3.5 lately).

Try this out to customize the refine template; it has worked well in my testing so far:

Plain Text
from langchain.prompts.chat import (
    AIMessagePromptTemplate,
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
)

from llama_index.prompts.prompts import RefinePrompt

# Refine prompt framed as a chat history: the original query, the
# existing answer, then a follow-up message carrying the new context
CHAT_REFINE_PROMPT_TMPL_MSGS = [
    HumanMessagePromptTemplate.from_template("{query_str}"),
    AIMessagePromptTemplate.from_template("{existing_answer}"),
    HumanMessagePromptTemplate.from_template(
        "I have more context below which can be used "
        "(only if needed) to update your previous answer.\n"
        "------------\n"
        "{context_msg}\n"
        "------------\n"
        "Given the new context, update the previous answer to better "
        "answer my previous query."
        "If the previous answer remains the same, repeat it verbatim. "
        "Never reference the new context or my previous query directly.",
    ),
]


CHAT_REFINE_PROMPT_LC = ChatPromptTemplate.from_messages(CHAT_REFINE_PROMPT_TMPL_MSGS)
CHAT_REFINE_PROMPT = RefinePrompt.from_langchain_prompt(CHAT_REFINE_PROMPT_LC)
...
index.query("my query", similarity_top_k=3, response_mode="compact", refine_template=CHAT_REFINE_PROMPT)
If you are using a vector index, you might also want to lower the chunk_size_limit slightly, maybe to about 1024. This makes the top_k retrieval work a little better.
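
For reference, that would look something like this (a sketch assuming the legacy ServiceContext API; on older versions you pass chunk_size_limit straight to the index constructor instead):

Plain Text
from llama_index import GPTSimpleVectorIndex, ServiceContext, SimpleDirectoryReader

documents = SimpleDirectoryReader("./docs").load_data()

# Smaller chunks make each node more focused, so similarity_top_k
# retrieval is more likely to surface the passage you actually need
service_context = ServiceContext.from_defaults(chunk_size_limit=1024)
index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)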
Sorry, I'm dumping a lot of concepts at once here lol
Wow, thank you very much for your help I really appreciate it

I am trying that code now and have lowered the chunk size to 1024.

It helped a lot, but the response for "account" is wildly different from the response to the same question asked with the word "user".

Would that be a common problem?
Yea kinda. It depends on what the model is retrieving to answer that question

You can check the sources in the response object

This will dump a list of nodes used to make the response (as well as their similarity scores!)
Plain Text
response = index.query(...)
print(response.source_nodes)
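
If you want the score lined up with each chunk of text, you can loop over the nodes; on the version I'm using the attributes look like this (they may differ slightly between releases):

Plain Text
for node in response.source_nodes:
    # each source node carries the retrieved text plus its similarity score
    print(node.similarity, node.source_text[:200])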
I see, so davinci might understand that "account" and "user" are similar words, but turbo doesn't?
Possibly, yes! It just sucks that davinci costs 10x more than gpt-3.5 :PSadge:
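
For what it's worth, if you ever want to A/B test davinci against turbo, swapping the completion model looks roughly like this (a sketch assuming the legacy LLMPredictor/ServiceContext setup):

Plain Text
from langchain.llms import OpenAI
from llama_index import GPTSimpleVectorIndex, LLMPredictor, ServiceContext, SimpleDirectoryReader

documents = SimpleDirectoryReader("./docs").load_data()

# Use text-davinci-003 for answer synthesis instead of gpt-3.5-turbo;
# retrieval still uses the same embeddings, only the model that writes
# the final answer changes -- at roughly 10x the cost
llm_predictor = LLMPredictor(llm=OpenAI(model_name="text-davinci-003", temperature=0))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)
index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)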