How do I see the value being passed to HumanMessagePromptTemplate?

Please stand by @Raz as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
HumanMessagePromptTemplate is a LangChain class used by LlamaIndex when customizing chat prompts. It formats a template into the string that represents a human message in a chat. The value passed to HumanMessagePromptTemplate.from_template is the template string for the prompt.

Here is an example of how it is used:

Python
from langchain.prompts.chat import HumanMessagePromptTemplate

chat_text_qa_msgs = [
    HumanMessagePromptTemplate.from_template(
        "Context information is below.\n"
        "---------------------\n"
        "{context_str}\n"
        "---------------------\n"
        "Given the context information and not prior knowledge, "
        "answer the question: {query_str}\n"
    ),
]


In this example, the string being passed to HumanMessagePromptTemplate.from_template is a template for the prompt. The placeholders {context_str} and {query_str} in the template will be replaced with the actual context and query when the prompt is used.
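
If you want to see the actual value after the placeholders are filled in, you can format the template yourself and inspect the resulting message. Here is a minimal sketch using LangChain's prompt API (the context and question strings are illustrative):

Python
from langchain.prompts.chat import HumanMessagePromptTemplate

template = HumanMessagePromptTemplate.from_template(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "answer the question: {query_str}\n"
)

# format() substitutes the placeholders and returns a HumanMessage
message = template.format(
    context_str="Paul Graham grew up writing essays.",
    query_str="What did the author do growing up?",
)

# The message content is the exact string sent as the human message
print(message.content)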

Please note that this is a customization of the default prompts, and you can modify the template string as per your requirements.

For more details, you can refer to the LlamaIndex documentation.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai how do I pass a previous conversation to index.as_query_engine
Let me check the knowledge base for you. One moment! :books:
----------------------
Based on the provided extracts, it seems that the index.as_query_engine() function does not directly accept a conversation history. However, you can initialize a conversation history and pass it to a ChatEngine object, which can then be used to handle the conversation context.

Here's an example of how you can do this:

Python
from llama_index.prompts import Prompt
from llama_index.llms import ChatMessage, MessageRole
from llama_index.chat_engine.condense_question import CondenseQuestionChatEngine

# Define a custom prompt
custom_prompt = Prompt("""\
Given a conversation (between Human and Assistant) and a follow up message from Human, \
rewrite the message to be a standalone question that captures all relevant context \
from the conversation.

<Chat History> 
{chat_history}

<Follow Up Message>
{question}

<Standalone question>
""")

# Initialize a list of `ChatMessage` objects to represent the conversation history
custom_chat_history = [
    ChatMessage(
        role=MessageRole.USER, 
        content='Hello assistant, we are having an insightful discussion about Paul Graham today.'
    ), 
    ChatMessage(
        role=MessageRole.ASSISTANT, 
        content='Okay, sounds good.'
    )
]

# Get a query engine from the index
query_engine = index.as_query_engine()

# Create a chat engine with the query engine, custom prompt, and conversation history
chat_engine = CondenseQuestionChatEngine.from_defaults(
    query_engine=query_engine, 
    condense_question_prompt=custom_prompt,
    chat_history=custom_chat_history,
    verbose=True
)


In this example, the custom_chat_history variable represents the previous conversation. This is passed to the CondenseQuestionChatEngine when it is created. The CondenseQuestionChatEngine then uses this conversation history when generating responses.
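
As a quick usage sketch (the question text is illustrative), a follow-up message is first condensed into a standalone question using the chat history, and the condensed question is then answered by the underlying query engine:

Python
# Because verbose=True was set above, the condensed standalone question
# is printed before the underlying query engine is called.
response = chat_engine.chat("What did he do growing up?")
print(response)
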
For more details, you can refer to the LlamaIndex documentation.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai what's the difference between Chat Engine and Query Engine
Let me check the knowledge base for you. One moment! :books:
----------------------
A Query Engine and a Chat Engine are both components of LlamaIndex, but they serve different purposes and are used in different contexts.

Query Engine: A query engine is an end-to-end pipeline that allows you to ask a question over your data. It takes in a natural language query and returns a response, along with the reference context retrieved and passed to the LLM (large language model). It's typically used for single-turn interactions where you ask a question and get an answer.

Chat Engine: A chat engine, on the other hand, is an end-to-end pipeline for having a conversation with your data. It handles multiple back-and-forth interactions rather than a single question and answer, maintaining the context of the conversation and using it to generate responses.

In summary, if you're looking to have a single question answered, you'd use a Query Engine. If you're looking to have a conversation with multiple questions and answers, you'd use a Chat Engine.
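
To make the distinction concrete, here is a minimal sketch (assuming index is an existing VectorStoreIndex, as in the examples below):

Python
# Query engine: stateless, single-turn question answering
query_engine = index.as_query_engine()
print(query_engine.query("What did the author do growing up?"))

# Chat engine: stateful, multi-turn conversation
chat_engine = index.as_chat_engine()
print(chat_engine.chat("What did the author do growing up?"))
print(chat_engine.chat("What did he do after that?"))  # resolved using the prior turn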
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai can I do a document Q&A with a chat engine?
Let me check the knowledge base for you. One moment! :books:
----------------------
Yes, you can perform a document Q&A with a Chat Engine in LlamaIndex. The Chat Engine is designed to handle multiple back-and-forth interactions and maintains the context of the conversation to generate responses. This means you can ask multiple questions about a document and the Chat Engine will keep track of the context to provide relevant answers.

Here's an example of how you might use a Chat Engine for a document Q&A:

Python
from llama_index import VectorStoreIndex, SimpleDirectoryReader

# Load your documents
documents = SimpleDirectoryReader('data').load_data()

# Create an index from your documents
index = VectorStoreIndex.from_documents(documents)

# Create a chat engine from the index
chat_engine = index.as_chat_engine()

# Now you can ask questions about your documents
response = chat_engine.chat("What did the author do growing up?")
print(response)


In this example, the chat() method is used to ask a question about the documents. You can call this method multiple times to ask multiple questions, and the Chat Engine will maintain the context of the conversation.
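
For example, continuing the snippet above (reset() clears the accumulated history so you can start a fresh conversation):

Python
# Ask a follow-up that relies on the earlier turn
response = chat_engine.chat("What did he do after that?")
print(response)

# Clear the conversation history to start over
chat_engine.reset()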

For more details, you can refer to the LlamaIndex documentation.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai how about adding past conversation to the document Q&A with chat_engine ?
Let me check the knowledge base for you. One moment! :books:
----------------------
Yes, you can include a past conversation in a document Q&A with a Chat Engine in LlamaIndex. The Chat Engine maintains the context of the conversation and uses it to generate responses, so you can seed it with earlier messages and then ask questions about your documents with that context in place.

Here's an example of how you might use a Chat Engine for a document Q&A with a past conversation:

Python
from llama_index import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms import ChatMessage, MessageRole
from llama_index.chat_engine.condense_question import CondenseQuestionChatEngine
from llama_index.prompts import Prompt

# Load your documents
documents = SimpleDirectoryReader('data').load_data()

# Create an index from your documents
index = VectorStoreIndex.from_documents(documents)

# Define a custom prompt
custom_prompt = Prompt("""\
Given a conversation (between Human and Assistant) and a follow up message from Human, \
rewrite the message to be a standalone question that captures all relevant context \
from the conversation.

<Chat History>
{chat_history}

<Follow Up Message>
{question}

<Standalone question>
""")

# Initialize a list of `ChatMessage` objects to represent the conversation history
custom_chat_history = [
    ChatMessage(
        role=MessageRole.USER,
        content='Hello assistant, we are having an insightful discussion about Paul Graham today.'
    ),
    ChatMessage(
        role=MessageRole.ASSISTANT,
        content='Okay, sounds good.'
    )
]

# Get a query engine from the index
query_engine = index.as_query_engine()

# Create a chat engine with the query engine, custom prompt, and conversation history
chat_engine = CondenseQuestionChatEngine.from_defaults(
    query_engine=query_engine,
    condense_question_prompt=custom_prompt,
    chat_history=custom_chat_history,
    verbose=True
)

# Now you can ask questions about your documents
response = chat_engine.chat("What did the author do growing up?")
print(response)


In this example, the custom_chat_history variable represents the previous conversation. It is passed to the CondenseQuestionChatEngine when the engine is created, and the engine then uses this conversation history when generating responses.

For more details, you can refer to the LlamaIndex documentation (https://gpt-index.readthedocs.io/en/latest/core_modules/query_modules/chat_engines/usage_pattern.html).
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai CondenseQuestionChatEngine seems to be ignoring the original query_engine index, any idea what I can do?
Let me check the knowledge base for you. One moment! :books:
@Raz That's a great question, but unfortunately I am only capable of responding to a maximum of three questions per thread. Don't hesitate to create a new thread for any further questions you have though. I'm here to help!