
Updated 3 months ago

how do I use CondenseQuestionChatEngine with index.as_query_engine to question the documents?
9 comments
Please stand by @Raz as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
To use CondenseQuestionChatEngine with index.as_query_engine() to question the documents, you need to follow these steps:

  1. First, you need to create a query_engine from your index. This engine will be used to query your documents.
Plain Text
query_engine = index.as_query_engine()


  2. Then, you can create a CondenseQuestionChatEngine using the from_defaults method. This method allows you to configure the chat engine with default parameters. You can also specify a custom prompt and chat history if needed.
Plain Text
from llama_index.prompts import Prompt
from llama_index.llms import ChatMessage, MessageRole
from llama_index.chat_engine import CondenseQuestionChatEngine

custom_prompt = Prompt("""\
Given a conversation (between Human and Assistant) and a follow up message from Human, \
rewrite the message to be a standalone question that captures all relevant context \
from the conversation.

<Chat History> 
{chat_history}

<Follow Up Message>
{question}

<Standalone question>
""")

# list of `ChatMessage` objects
custom_chat_history = [
    ChatMessage(
        role=MessageRole.USER, 
        content='Hello assistant, we are having an insightful discussion about Paul Graham today.'
    ), 
    ChatMessage(
        role=MessageRole.ASSISTANT, 
        content='Okay, sounds good.'
    )
]

chat_engine = CondenseQuestionChatEngine.from_defaults(
    query_engine=query_engine, 
    condense_question_prompt=custom_prompt,
    chat_history=custom_chat_history,
    verbose=True
)


  3. Finally, you can use the chat or stream_chat method of the CondenseQuestionChatEngine to ask questions about the documents. The stream_chat method streams the response token by token.
Plain Text
response = chat_engine.chat("What did Paul Graham do after YC?")


or

Plain Text
response = chat_engine.stream_chat("What did Paul Graham do after YC?")
for token in response.response_gen:
    print(token, end="")

Please note that you need to replace "What did Paul Graham do after YC?" with your own question.
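Under the hood, the steps above boil down to one loop: the engine fills the condense prompt with the chat history and the new message, asks the LLM for a standalone question, then sends that question to the query engine. A minimal pure-Python sketch of that loop, with `fake_llm` and `fake_query_engine` as stand-ins for a real LLM and query engine (illustration only, not the LlamaIndex implementation):

```python
# Sketch of what CondenseQuestionChatEngine does on each turn:
# condense (history + follow-up) into a standalone question, then query.
CONDENSE_PROMPT = (
    "Given a conversation (between Human and Assistant) and a follow up "
    "message from Human, rewrite the message to be a standalone question.\n\n"
    "<Chat History>\n{chat_history}\n\n"
    "<Follow Up Message>\n{question}\n\n"
    "<Standalone question>\n"
)

def fake_llm(prompt: str) -> str:
    # A real implementation would send `prompt` to an LLM here.
    return "What did Paul Graham do after YC?"

def fake_query_engine(question: str) -> str:
    # A real implementation would run index.as_query_engine().query(question).
    return f"Answer to: {question}"

def condense_and_query(chat_history, question):
    # Flatten the history into the prompt, condense, then query.
    history_str = "\n".join(f"{role}: {msg}" for role, msg in chat_history)
    prompt = CONDENSE_PROMPT.format(chat_history=history_str, question=question)
    standalone = fake_llm(prompt)
    return fake_query_engine(standalone)

history = [("Human", "We are discussing Paul Graham."), ("Assistant", "Okay.")]
print(condense_and_query(history, "What did he do after YC?"))
# → Answer to: What did Paul Graham do after YC?
```

This is why the custom_prompt above must contain the {chat_history} and {question} placeholders: they are the two slots the engine fills before calling the LLM.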

Sources:
  • CondenseQuestionChatEngine API Reference ()
  • Chat Engine Usage Pattern ()
  • Chat Engine - Condense Question Mode Example ()
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai how do i get CondenseQuestionChatEngine to return the filename of the query result
Let me check the knowledge base for you. One moment! :books:
----------------------
Based on the provided extracts, there's no direct mention of how to get the filename of the query result using CondenseQuestionChatEngine. The chat engine is designed to return responses based on the queries, not the metadata of the documents such as filenames.

However, if the filename or any other metadata is stored as part of the document data in the index, you might be able to retrieve it by customizing the query engine or the chat engine. This would likely involve modifying the source code of LlamaIndex or extending its classes with your own code.

For more detailed information or assistance, you might want to refer to the LlamaIndex documentation () or consider reaching out to the maintainers of the project.
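That said, in recent LlamaIndex versions the chat response exposes the retrieved nodes as a source_nodes list, and readers such as SimpleDirectoryReader record a file_name entry in each node's metadata; check whether your version has this. A sketch of the access pattern, using stub dataclasses that mimic the shape of the real response objects (the stubs and the file_names helper are for illustration only):

```python
# Sketch: reading file names from a chat response's source nodes.
# StubNode / StubNodeWithScore / StubResponse mimic the shape of
# LlamaIndex's node and chat-response objects; they are not the real API.
from dataclasses import dataclass, field

@dataclass
class StubNode:
    metadata: dict = field(default_factory=dict)

@dataclass
class StubNodeWithScore:
    node: StubNode
    score: float = 0.0

@dataclass
class StubResponse:
    response: str
    source_nodes: list = field(default_factory=list)

def file_names(response) -> list:
    # With a real engine this would be: chat_engine.chat(...).source_nodes
    return [
        ns.node.metadata.get("file_name", "<unknown>")
        for ns in response.source_nodes
    ]

resp = StubResponse(
    response="Paul Graham started writing essays.",
    source_nodes=[StubNodeWithScore(StubNode({"file_name": "paul_graham_essay.txt"}))],
)
print(file_names(resp))  # → ['paul_graham_essay.txt']
```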
----------------------
@kapa.ai can we get metadata from CondenseQuestionChatEngine?
Let me check the knowledge base for you. One moment! :books:
----------------------
As with the filename question above, the provided extracts don't mention a direct way to retrieve document metadata through CondenseQuestionChatEngine; the engine is designed to return answers to queries, not the metadata of the documents.

However, if the metadata is stored as part of the document data in the index, you might be able to retrieve it by customizing the query engine or the chat engine, which would likely involve extending the LlamaIndex classes with your own code.

For more detailed information or assistance, you might want to refer to the LlamaIndex documentation () or consider reaching out to the maintainers of the project.
----------------------