@Logan M What about storing context for user messages? Should I keep a list of previous context? Or how will it technically work?
Yea, probably just a list of messages that you save to disk (maybe a pickle?). If you use the react chat agent, you can use the memory modules from langchain too
Up to you how to best do it 🙂
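A minimal sketch of the save-to-disk idea, assuming you keep a plain list of (role, text) tuples (the file name and message shape here are made up, not from langchain):

```python
import pickle
from pathlib import Path

HISTORY_FILE = Path("chat_history.pkl")  # hypothetical location

def load_history() -> list:
    # return the saved messages, or an empty list on first run
    if HISTORY_FILE.exists():
        return pickle.loads(HISTORY_FILE.read_bytes())
    return []

def save_history(messages: list) -> None:
    HISTORY_FILE.write_bytes(pickle.dumps(messages))

history = load_history()
history.append(("human", "Hello, who are you?"))
history.append(("ai", "I am doing well, how are you?"))
save_history(history)
```

On the next run, `load_history()` gives the list back, so you can truncate it to the last N messages before building the prompt.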
Which agent / module do you recommend to use for chat bot? @Logan M
Up to your experience tbh. Everything has its pros and cons lol
I haven't played around with any enough to have a strong opinion, but as of today I'd probably choose the react agent
I probably did something wrong :( @Logan M
AttributeError: 'list' object has no attribute 'partial_format'
I saw problem with variables but still have the same issue @Logan M
oh, you missed two steps with the template, easy fix 🙂
CHAT_REFINE_PROMPT_LC = ChatPromptTemplate.from_messages(CHAT_REFINE_PROMPT_TMPL_MSGS)
CHAT_REFINE_PROMPT = RefinePrompt.from_langchain_prompt(CHAT_REFINE_PROMPT_LC)
Maybe I missed something important about logic @Logan M
Because it didn't see a previous context
Yea, you'll have to update the prompts with chat history (as well as add the system prompt, if you want, using SystemMessagePromptTemplate)
For some reason it didn't change at all:( Just empty @Logan M
Those variables (existing answer, context_msg) are actually filled in automatically by llama-index
Give me one sec, I'll make an example for you
Sure, will wait:)
Thank you so much in advance:) @Logan M
lol might take a sec, got stuck in a meeting lol
No prob:) @Logan M
Not urgent, but I’m really curious about solution😁
Here's a very basic example of how this might work
from langchain.prompts.chat import (
    AIMessagePromptTemplate,
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)
from llama_index.prompts.base import Prompt

# you'll want to limit the length of this!
chat_history = (
    "Human: Hello, who are you?\n"
    "AI: I am doing well, how are you?\n"
)

CHAT_REFINE_PROMPT_TMPL_MSGS = [
    SystemMessagePromptTemplate.from_template("My system message"),
    HumanMessagePromptTemplate.from_template("{query_str}"),
    AIMessagePromptTemplate.from_template("{existing_answer}"),
    HumanMessagePromptTemplate.from_template(
        "We have the opportunity to refine the above answer "
        "(only if needed) with some more context below.\n"
        "------------\n"
        "Chat History:\n"
        f"{chat_history}\n"
        "New Context:\n"
        "{context_msg}\n"
        "------------\n"
        "Given the new context, refine the original answer to better "
        "answer the question. "
        "If the context isn't useful, output the original answer again.",
    ),
]
CHAT_REFINE_PROMPT_LC = ChatPromptTemplate.from_messages(CHAT_REFINE_PROMPT_TMPL_MSGS)
CHAT_REFINE_PROMPT = Prompt.from_langchain_prompt(CHAT_REFINE_PROMPT_LC)

CHAT_QA_PROMPT_TMPL_MSGS = [
    SystemMessagePromptTemplate.from_template("My system message"),
    HumanMessagePromptTemplate.from_template(
        "Context information is below. \n"
        "------------\n"
        "Chat History:\n"
        f"{chat_history}\n"
        "New Context:\n"
        "{context_msg}\n"
        "------------\n"
        "Given the context information and not prior knowledge, "
        "answer the question: {query_str}\n"
    ),
]
CHAT_QA_PROMPT_LC = ChatPromptTemplate.from_messages(CHAT_QA_PROMPT_TMPL_MSGS)
CHAT_QA_PROMPT = Prompt.from_langchain_prompt(CHAT_QA_PROMPT_LC)

query_engine = index.as_query_engine(
    text_qa_template=CHAT_QA_PROMPT, refine_template=CHAT_REFINE_PROMPT
)
But I ran into an issue again :(
The code is the same as what you sent me above @Logan M
Oh whoops, the qa template (refine is fine) needs context_str
not context_msg
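To keep the two straight: the QA template expects `context_str` (plus `query_str`), while the refine template expects `context_msg` (plus `query_str` and `existing_answer`), and llama-index fills these in itself. A quick stdlib way to sanity-check which placeholders a template string actually expects (the template strings here are illustrative stand-ins, not the full ones above):

```python
from string import Formatter

# trimmed stand-ins for the two templates, with the same placeholders
qa_tmpl = (
    "Context information is below.\n"
    "------------\n"
    "{context_str}\n"  # QA template: retrieved text goes in context_str
    "------------\n"
    "Answer the question: {query_str}\n"
)
refine_tmpl = (
    "New context: {context_msg}\n"  # refine template uses context_msg instead
    "Refine this answer: {existing_answer}\n"
)

def template_vars(tmpl: str) -> set:
    # extract the {placeholder} names a template expects
    return {name for _, name, _, _ in Formatter().parse(tmpl) if name}

print(sorted(template_vars(qa_tmpl)))      # ['context_str', 'query_str']
print(sorted(template_vars(refine_tmpl)))  # ['context_msg', 'existing_answer']
```

Running this on your own template strings makes a `context_msg`/`context_str` mix-up easy to spot before the query fails.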
So sorry for being annoying:) @Logan M
But now it kinda doesn't see the vector information at all, though it still attaches sources
And it's kind of unclear to me why we use two nearly identical prompts, QA and refine. Is it necessary to use both? @Logan M
The refine template only gets triggered when the retrieved nodes don't all fit into a single LLM call. It's good to customize both just to be sure, but it will be used rarely
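A rough sketch of why that is, assuming the synthesis loop works roughly like this (heavily simplified; the real llama-index logic is more involved):

```python
def synthesize(chunks, query, llm):
    # the first chunk goes through the QA template...
    answer = llm(f"Context: {chunks[0]}\nQuestion: {query}")
    # ...and the refine template only fires for chunks that didn't fit
    for chunk in chunks[1:]:
        answer = llm(
            f"Existing answer: {answer}\nNew context: {chunk}\n"
            "Refine if needed, else repeat the answer."
        )
    return answer

# fake LLM that just counts calls, to show when refine is used
calls = []
fake_llm = lambda prompt: (calls.append(prompt) or f"answer #{len(calls)}")

synthesize(["chunk A"], "What is X?", fake_llm)
print(len(calls))  # 1: one chunk -> QA template only, refine never used
synthesize(["chunk A", "chunk B", "chunk C"], "What is X?", fake_llm)
print(len(calls))  # 4 total: 1 more QA call + 2 refine calls for the extras
```

So with a small `similarity_top_k` and short chunks, most queries never touch the refine template at all.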
I'm confused what's missing here?
Oh, got it.
What about a problem what LLM doesn't see a vector data and just look into chat history? @Logan M
I asked about my vector data, but it didn't give me answer and just said like «I can't find it in the chat history»
In the top example with chat history, in the bottom without
Rip. Guess the prompt is messing up the answer 🥲🥲
CHAT_QA_PROMPT_TMPL_MSGS = [
    # SystemMessagePromptTemplate.from_template(system_message),
    HumanMessagePromptTemplate.from_template(
        "Context information is below. \n"
        "------------\n"
        "Chat History:\n"
        f"{chat_history}\n"
        "New Context:\n"
        "{query_str}\n"
        "------------\n"
        "Given the context information and not prior knowledge, "
        "answer the question: {query_str}\n"
    ),
]
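One thing worth checking in the template above: the "New Context" slot uses `{query_str}` rather than `{context_str}`, so the retrieved vector data may never reach the LLM at all (the question just appears twice). A quick check with a trimmed copy of that template string:

```python
# trimmed copy of the QA template above, with the same placeholders
tmpl = (
    "Chat History:\n"
    "Human: Hello, who are you?\n"
    "New Context:\n"
    "{query_str}\n"  # <- should probably be {context_str}
    "Given the context information and not prior knowledge, "
    "answer the question: {query_str}\n"
)

# there is no slot for the retrieved context, so it can't be injected
print("{context_str}" in tmpl)  # False
filled = tmpl.format(query_str="What does my vector data say about X?")
print(filled.count("What does my vector data say about X?"))  # 2
```

That would line up with the LLM answering from chat history only, since the question fills both slots and the retrieved text is silently dropped.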
@Logan M here is a prompt
Is something wrong in the code maybe?
Prompt engineering is super annoying. Maybe ask chatgpt to make the prompt better haha that's usually what I do.
You're on your own though for a bit, heading out on a flight
Are you sure that problem with prompt only, not in the code?
@Logan M hey, it's me again :)
Is there any way at least to add only system message?
Yea, in my example code from the other day, I left a placeholder system message (I see you have it commented out in the screenshots)
So you can remove the mention of chat history from those templates and just add the system message
Should I use refine or QA? @Logan M
Both, since both can be used during a single query
Maybe I'm super unlucky and it just never works for me :( @Logan M
What kind of index are you using?
GPTVectorStoreIndex @Logan M
wow that's super weird. And if you remove the template, it works?
from langchain.prompts.chat import (
    AIMessagePromptTemplate,
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)
from llama_index.prompts.base import Prompt

CHAT_REFINE_PROMPT_TMPL_MSGS = [
    SystemMessagePromptTemplate.from_template("Put HaHa at the end of each response."),
    HumanMessagePromptTemplate.from_template("{query_str}"),
    AIMessagePromptTemplate.from_template("{existing_answer}"),
    HumanMessagePromptTemplate.from_template(
        "We have the opportunity to refine the above answer "
        "(only if needed) with some more context below.\n"
        "------------\n"
        "{context_msg}\n"
        "------------\n"
        "Given the new context, refine the original answer to better "
        "answer the question. "
        "If the context isn't useful, output the original answer again.",
    ),
]
CHAT_REFINE_PROMPT_LC = ChatPromptTemplate.from_messages(CHAT_REFINE_PROMPT_TMPL_MSGS)
CHAT_REFINE_PROMPT = Prompt.from_langchain_prompt(CHAT_REFINE_PROMPT_LC)

CHAT_QA_PROMPT_TMPL_MSGS = [
    SystemMessagePromptTemplate.from_template("Put HaHa at the end of each response."),
    HumanMessagePromptTemplate.from_template(
        "Context information is below. \n"
        "------------\n"
        "{context_str}\n"
        "------------\n"
        "Given the context information and not prior knowledge, "
        "answer the question: {query_str}\n"
    ),
]
CHAT_QA_PROMPT_LC = ChatPromptTemplate.from_messages(CHAT_QA_PROMPT_TMPL_MSGS)
CHAT_QA_PROMPT = Prompt.from_langchain_prompt(CHAT_QA_PROMPT_LC)

query_engine = index.as_query_engine(
    text_qa_template=CHAT_QA_PROMPT, refine_template=CHAT_REFINE_PROMPT
)
I think you provided ONLY the system prompt
Still need the other stuff 🙂