`HumanMessagePromptTemplate` is used when customizing chat prompts in LlamaIndex. It formats the prompt into a string that represents a human message in a chat. The value passed to `HumanMessagePromptTemplate` is a string that forms the template for the prompt:

```python
from langchain.prompts.chat import HumanMessagePromptTemplate

chat_text_qa_msgs = [
    HumanMessagePromptTemplate.from_template(
        "Context information is below.\n"
        "---------------------\n"
        "{context_str}\n"
        "---------------------\n"
        "Given the context information and not prior knowledge, "
        "answer the question: {query_str}\n"
    ),
]
```
`HumanMessagePromptTemplate.from_template` builds the template for the prompt. The placeholders `{context_str}` and `{query_str}` in the template will be replaced with the actual context and query when the prompt is used.
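To see the substitution in action, you can format the message directly. This is a minimal sketch; the context and question strings below are made-up values for illustration:

```python
# Fill the placeholders with hypothetical values; `format` returns a HumanMessage
message = chat_text_qa_msgs[0].format(
    context_str="Paul Graham co-founded Y Combinator in 2005.",
    query_str="Who co-founded Y Combinator?"
)
print(message.content)  # the fully rendered human message
```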
The `index.as_query_engine()` function does not directly accept a conversation history. However, you can initialize a conversation history and pass it to a `ChatEngine` object, which can then use it to handle the conversation context:

```python
from llama_index.prompts import Prompt
from llama_index.llms import ChatMessage, MessageRole
from llama_index.chat_engine.condense_question import CondenseQuestionChatEngine

# Define a custom prompt
custom_prompt = Prompt("""\
Given a conversation (between Human and Assistant) and a follow up message from Human, \
rewrite the message to be a standalone question that captures all relevant context \
from the conversation.
<Chat History>
{chat_history}
<Follow Up Message>
{question}
<Standalone question>
""")

# Initialize a list of `ChatMessage` objects to represent the conversation history
custom_chat_history = [
    ChatMessage(
        role=MessageRole.USER,
        content='Hello assistant, we are having an insightful discussion about Paul Graham today.'
    ),
    ChatMessage(
        role=MessageRole.ASSISTANT,
        content='Okay, sounds good.'
    )
]

# Get a query engine from the index
query_engine = index.as_query_engine()

# Create a chat engine with the query engine, custom prompt, and conversation history
chat_engine = CondenseQuestionChatEngine.from_defaults(
    query_engine=query_engine,
    condense_question_prompt=custom_prompt,
    chat_history=custom_chat_history,
    verbose=True
)
```
The `custom_chat_history` variable represents the previous conversation. It is passed to the `CondenseQuestionChatEngine` when it is created, and the engine then uses this conversation history when generating responses.
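For example (a sketch; the follow-up question is hypothetical and assumes `index` was built over material about Paul Graham), a pronoun in the next message is resolved against the seeded history:

```python
# "he" refers to Paul Graham, established in custom_chat_history;
# with verbose=True, the condensed standalone question is printed
response = chat_engine.chat("What did he do before Y Combinator?")
print(response)
```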
You can also create a chat engine directly from an index and ask questions about your documents:

```python
from llama_index import VectorStoreIndex, SimpleDirectoryReader

# Load your documents
documents = SimpleDirectoryReader('data').load_data()

# Create an index from your documents
index = VectorStoreIndex.from_documents(documents)

# Create a chat engine from the index
chat_engine = index.as_chat_engine()

# Now you can ask questions about your documents
response = chat_engine.chat("What did the author do growing up?")
print(response)
```
The `chat()` method is used to ask a question about the documents. You can call it multiple times to ask follow-up questions, and the chat engine will maintain the context of the conversation.
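For example (a sketch; the follow-up is hypothetical), a second question can build on the first, and `reset()` clears the accumulated history:

```python
# Follow-up that relies on the previous turn for context
response = chat_engine.chat("Oh interesting, tell me more.")
print(response)

# Start a fresh conversation with no memory of earlier turns
chat_engine.reset()
```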
Putting it all together:

```python
from llama_index import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms import ChatMessage, MessageRole
from llama_index.chat_engine.condense_question import CondenseQuestionChatEngine
from llama_index.prompts import Prompt
# Load your documents
documents = SimpleDirectoryReader('data').load_data()
# Create an index from your documents
index = VectorStoreIndex.from_documents(documents)
# Define a custom prompt
custom_prompt = Prompt("""\
Given a conversation (between Human and Assistant) and a follow up message from Human, \
rewrite the message to be a standalone question that captures all relevant context \
from the conversation.
<Chat History>
{chat_history}
<Follow Up Message>
{question}
<Standalone question>
""")
# Initialize a list of `ChatMessage` objects to represent the conversation history
custom_chat_history = [
    ChatMessage(
        role=MessageRole.USER,
        content='Hello assistant, we are having an insightful discussion about Paul Graham today.'
    ),
    ChatMessage(
        role=MessageRole.ASSISTANT,
        content='Okay, sounds good.'
    )
]

# Get a query engine from the index
query_engine = index.as_query_engine()

# Create a chat engine with the query engine, custom prompt, and conversation history
chat_engine = CondenseQuestionChatEngine.from_defaults(
    query_engine=query_engine,
    condense_question_prompt=custom_prompt,
    chat_history=custom_chat_history,
    verbose=True
)
```
In this example, the `custom_chat_history` variable represents the previous conversation. It is passed to the `CondenseQuestionChatEngine` when it is created, and the engine then uses this conversation history when generating responses.
For more details, you can refer to the [LlamaIndex documentation](https://gpt-index.readthedocs.io/en/latest/core_modules/query_modules/chat_engines/usage_pattern.html).