hey

hey :)

Is there any info on how to add memory to my current LlamaIndex setup (GPTVectorStoreIndex) and add a system message for my bot?
You can add a system prompt by customizing the prompt templates

You'll want to customize the refine template and the text qa template

The default qa template is here (it gets transformed into a single human role message):
https://github.com/jerryjliu/llama_index/blob/bc9546ca234beec6d13eba2e290d17cc36c633a1/llama_index/prompts/default_prompts.py#L98

The refine template is here (this is the format you'll likely want to follow for adding a system message)
https://github.com/jerryjliu/llama_index/blob/bc9546ca234beec6d13eba2e290d17cc36c633a1/llama_index/prompts/chat_prompts.py#L12

query_engine = index.as_query_engine(text_qa_template=my_qa_template, refine_template=my_refine_template)

In the templates, you can insert the chat history (or try out one of our chat engines, if they suit your needs)
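For reference, here's a minimal sketch of what a custom string template could look like with the 0.6.x-era API used in this thread (the template wording itself is just illustrative, not the library default):

Plain Text
from llama_index.prompts.base import Prompt

# plain string template; llama_index fills in {context_str} and {query_str}
MY_QA_TMPL = (
    "You are a helpful assistant.\n"  # stands in for a system instruction
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "answer the question: {query_str}\n"
)
my_qa_template = Prompt(MY_QA_TMPL)

query_engine = index.as_query_engine(text_qa_template=my_qa_template)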
@Logan M What about storing context for user messages? Should I keep a list of previous context? Or how will it technically work?
Yea probably just a list of messages that you save to disk (maybe a pickle?). If you use the react chat agent, you can use the memory modules from langchain too
Up to you how to best do it 🙂
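A minimal sketch of the save-to-disk idea (the file name and message format here are just placeholders):

Plain Text
import pickle
from pathlib import Path

HISTORY_PATH = Path("chat_history.pkl")  # placeholder location

def load_history():
    # return the saved message list, or an empty one on first run
    if HISTORY_PATH.exists():
        return pickle.loads(HISTORY_PATH.read_bytes())
    return []

def save_history(history):
    HISTORY_PATH.write_bytes(pickle.dumps(history))

history = load_history()
history.append(("Human", "Hello, who are you?"))
history.append(("AI", "I am doing well, how are you?"))
save_history(history)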
Which agent / module do you recommend to use for chat bot? @Logan M
Up to your experience tbh. Everything has its pros and cons lol

I haven't played around with any of them enough to have a strong opinion, but right now I'd probably choose the react agent
I probably did something wrong :( @Logan M

AttributeError: 'list' object has no attribute 'partial_format'
Attachment
CleanShot_2023-06-13_at_04.08.272x.png
I saw the problem with the variables but still have the same issue @Logan M
Attachment
CleanShot_2023-06-13_at_04.14.182x.png
oh, you missed two steps with the template, easy fix 🙂

# build the LangChain chat template from the message list
CHAT_REFINE_PROMPT_LC = ChatPromptTemplate.from_messages(CHAT_REFINE_PROMPT_TMPL_MSGS)
# wrap it as a llama_index refine prompt so the query engine can use it
CHAT_REFINE_PROMPT = RefinePrompt.from_langchain_prompt(CHAT_REFINE_PROMPT_LC)
Maybe I missed something important about the logic @Logan M
Attachment
CleanShot_2023-06-13_at_04.27.292x.png
Because it didn't see the previous context
Yea, you'll have to update the prompts with chat history (as well as add the system prompt, if you want, using SystemMessagePromptTemplate)
For some reason it didn't change at all :( It's just empty @Logan M
Attachment
CleanShot_2023-06-13_at_04.42.482x.png
Those variables (existing_answer, context_msg) are actually filled in automatically by llama-index
Give me one sec, I'll make an example for you
Sure, will wait:)

Thank you so much in advance:) @Logan M
lol might take a sec, got stuck in a meeting lol
No prob:) @Logan M


Not urgent, but I'm really curious about the solution 😁
Here's a very basic example of how this might work

Plain Text
from langchain.prompts.chat import (
    AIMessagePromptTemplate,
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)

from llama_index.prompts.base import Prompt

# you'll want to limit the length of this!
chat_history = (
    "Human: Hello, who are you?\n"
    "AI: I am doing well, how are you?\n"
)

CHAT_REFINE_PROMPT_TMPL_MSGS = [
    SystemMessagePromptTemplate.from_template("My system message"),
    HumanMessagePromptTemplate.from_template("{query_str}"),
    AIMessagePromptTemplate.from_template("{existing_answer}"),
    HumanMessagePromptTemplate.from_template(
        "We have the opportunity to refine the above answer "
        "(only if needed) with some more context below.\n"
        "------------\n"
        "Chat History:\n"
        f"{chat_history}\n"
        "New Context:\n"
        "{context_msg}\n"
        "------------\n"
        "Given the new context, refine the original answer to better "
        "answer the question. "
        "If the context isn't useful, output the original answer again.",
    ),
]

CHAT_REFINE_PROMPT_LC = ChatPromptTemplate.from_messages(CHAT_REFINE_PROMPT_TMPL_MSGS)
CHAT_REFINE_PROMPT = Prompt.from_langchain_prompt(CHAT_REFINE_PROMPT_LC)

CHAT_QA_PROMPT_TMPL_MSGS = [
    SystemMessagePromptTemplate.from_template("My system message"),
    HumanMessagePromptTemplate.from_template(
        "Context information is below. \n"
        "------------\n"
        "Chat History:\n"
        f"{chat_history}\n"
        "New Context:\n"
        "{context_msg}\n"
        "------------\n"
        "Given the context information and not prior knowledge, "
        "answer the question: {query_str}\n"
    ),
]

CHAT_QA_PROMPT_LC = ChatPromptTemplate.from_messages(CHAT_QA_PROMPT_TMPL_MSGS)
CHAT_QA_PROMPT = Prompt.from_langchain_prompt(CHAT_QA_PROMPT_LC)


query_engine = index.as_query_engine(
    text_qa_template=CHAT_QA_PROMPT, refine_template=CHAT_REFINE_PROMPT
)
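On the "limit the length" note in the code above, one naive approach (the cap is arbitrary) is to keep only the last few turns before baking them into the template string:

Plain Text
# messages as "Role: text" strings, oldest first (hypothetical format)
history_messages = [
    "Human: Hello, who are you?",
    "AI: I am doing well, how are you?",
]
MAX_TURNS = 6  # arbitrary cap so the prompt stays small
chat_history = "\n".join(history_messages[-MAX_TURNS:]) + "\n"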
@Logan M thank you!
But I ran into an issue again :(
Attachment
CleanShot_2023-06-13_at_11.50.532x.png
The code is the same as what you sent me above @Logan M
@Logan M
Attachment
CleanShot_2023-06-13_at_11.51.362x.png
Oh whoops, the qa template (refine is fine) needs context_str not context_msg
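So the QA message from the example above becomes (only the variable name changes; the refine template keeps context_msg):

Plain Text
HumanMessagePromptTemplate.from_template(
    "Context information is below. \n"
    "------------\n"
    "Chat History:\n"
    f"{chat_history}\n"
    "New Context:\n"
    "{context_str}\n"  # was {context_msg}
    "------------\n"
    "Given the context information and not prior knowledge, "
    "answer the question: {query_str}\n"
)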
So sorry for being annoying:) @Logan M

But now it kinda doesn't see the vector information at all, though it still attaches sources
Attachment
CleanShot_2023-06-13_at_16.23.572x.png
And it's not really clear to me why we use almost the same prompts for QA and refine. Is it necessary to use both? @Logan M
The refine template only gets triggered when the retrieved nodes don't all fit into a single LLM call. It's good to customize both just to be sure, but the refine template will be used rarely
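For example (the top_k value here is arbitrary), how often refine triggers depends on how much text gets retrieved per query:

Plain Text
query_engine = index.as_query_engine(
    similarity_top_k=5,  # more retrieved nodes -> more likely to overflow one call
    text_qa_template=CHAT_QA_PROMPT,
    refine_template=CHAT_REFINE_PROMPT,
)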
I'm confused, what's missing here?
Oh, got it.

What about the problem where the LLM doesn't see the vector data and just looks at the chat history? @Logan M
I asked about my vector data, but it didn't give me an answer and just said something like "I can't find it in the chat history"
Like this @Logan M
Attachment
CleanShot_2023-06-13_at_17.21.402x.png
The top example is with chat history, the bottom one is without
Attachment
CleanShot_2023-06-13_at_17.22.462x.png
Rip. Guess the prompt is messing up the answer 🥲🥲
Plain Text
CHAT_QA_PROMPT_TMPL_MSGS = [
    # SystemMessagePromptTemplate.from_template(system_message),
    HumanMessagePromptTemplate.from_template(
        "Context information is below. \n"
        "------------\n"
        "Chat History:\n"
        f"{chat_history}\n"
        "New Context:\n"
        "{query_str}\n"
        "------------\n"
        "Given the context information and not prior knowledge, "
        "answer the question: {query_str}\n"
    ),
]
@Logan M here is a prompt
Is something wrong in the code maybe?
Attachment
CleanShot_2023-06-13_at_17.35.572x.png
Prompt engineering is super annoying. Maybe ask chatgpt to make the prompt better haha, that's usually what I do.

You're on your own for a bit though, heading out on a flight
Are you sure the problem is only with the prompt, not the code?
@Logan M hey, it's me again :)

Is there any way to at least add just the system message?
Yea, in my example code from the other day, I left a placeholder system message (I see you have it commented out in the screenshots)

So you can remove the mention of chat history from those templates and just add the system message
Should I use refine or QA? @Logan M
Both, since both can be used during a single query
Maybe I'm super unlucky and it just never works for me :( @Logan M
Attachment
CleanShot_2023-06-15_at_03.54.532x.png
What kind of index are you using?
GPTVectorStoreIndex @Logan M
wow that's super weird. And if you remove the template, it works?
maybe I see the issue
Plain Text
from langchain.prompts.chat import (
    AIMessagePromptTemplate,
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)

from llama_index.prompts.base import Prompt

CHAT_REFINE_PROMPT_TMPL_MSGS = [
    SystemMessagePromptTemplate.from_template("Put HaHa at the end of each response."),
    HumanMessagePromptTemplate.from_template("{query_str}"),
    AIMessagePromptTemplate.from_template("{existing_answer}"),
    HumanMessagePromptTemplate.from_template(
        "We have the opportunity to refine the above answer "
        "(only if needed) with some more context below.\n"
        "------------\n"
        "{context_msg}\n"
        "------------\n"
        "Given the new context, refine the original answer to better "
        "answer the question. "
        "If the context isn't useful, output the original answer again.",
    ),
]

CHAT_REFINE_PROMPT_LC = ChatPromptTemplate.from_messages(CHAT_REFINE_PROMPT_TMPL_MSGS)
CHAT_REFINE_PROMPT = Prompt.from_langchain_prompt(CHAT_REFINE_PROMPT_LC)

CHAT_QA_PROMPT_TMPL_MSGS = [
    SystemMessagePromptTemplate.from_template("Put HaHa at the end of each response."),
    HumanMessagePromptTemplate.from_template(
        "Context information is below. \n"
        "------------\n"
        "{context_str}\n"
        "------------\n"
        "Given the context information and not prior knowledge, "
        "answer the question: {query_str}\n"
    ),
]

CHAT_QA_PROMPT_LC = ChatPromptTemplate.from_messages(CHAT_QA_PROMPT_TMPL_MSGS)
CHAT_QA_PROMPT = Prompt.from_langchain_prompt(CHAT_QA_PROMPT_LC)


query_engine = index.as_query_engine(
    text_qa_template=CHAT_QA_PROMPT, refine_template=CHAT_REFINE_PROMPT
)
Try that instead
I think you provided ONLY the system prompt
Still need the other stuff 🙂
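In other words, the likely mistake (a guess reconstructed from the screenshots) was a message list with only the system message, leaving no {context_str}/{query_str} slots for llama_index to fill:

Plain Text
# what probably went wrong: no Human message, so the retrieved context
# and the question had nowhere to go
CHAT_QA_PROMPT_TMPL_MSGS = [
    SystemMessagePromptTemplate.from_template("Put HaHa at the end of each response."),
    # missing: HumanMessagePromptTemplate with {context_str} and {query_str}
]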
It works:)

Thank you:)