To customize the prompts for the `condense_plus_context` chat mode, you can use the `from_defaults` method of the `CondensePlusContextChatEngine` class. This method allows you to specify custom prompts when initializing the chat engine:

```python
from llama_index.core.chat_engine import CondensePlusContextChatEngine
from llama_index.core.memory import ChatMemoryBuffer

# Initialize your custom prompts
context_prompt = "Your custom context prompt"
condense_prompt = "Your custom condense prompt"

# Initialize the chat engine with the custom prompts
chat_engine = CondensePlusContextChatEngine.from_defaults(
    retriever=retriever,  # your retriever, e.g. index.as_retriever()
    context_prompt=context_prompt,
    condense_prompt=condense_prompt,
    chat_history=chat_history,  # optional list of ChatMessage objects
    memory=memory,  # optional BaseMemory instance
    memory_cls=ChatMemoryBuffer,
    service_context=service_context,  # optional ServiceContext
    system_prompt="Your custom system prompt",
    llm=llm,  # your LLM instance
    verbose=True,
)
```
Please replace the placeholder values (`retriever`, `chat_history`, `memory`, `service_context`, `llm`) with your actual instances of `BaseRetriever`, `BaseMemory`, `ChatMemoryBuffer`, `ServiceContext`, and `LLM`.
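Once the engine is constructed, you can chat with it as usual. A minimal usage sketch (the question string is illustrative):

```python
# Ask a question; the engine condenses the chat history, retrieves context,
# and generates a response using the configured prompts.
response = chat_engine.chat("What did the author work on?")
print(response)

# Reset the conversation state when starting a new session.
chat_engine.reset()
```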
You do not need to manually add special tokens such as EOT (End Of Text) to the prompts; the LlamaIndex library handles the addition of these special tokens internally.
To chat over your PDF data, you can use the `best` chat mode. This mode uses the `openai` mode if you are using an OpenAI model that supports the latest function calling API; otherwise, it uses the `react` mode. Here's an example using the `best` mode:
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4")
data = SimpleDirectoryReader(input_dir="./data/your_pdf_directory/").load_data()
index = VectorStoreIndex.from_documents(data)
chat_engine = index.as_chat_engine(chat_mode="best", llm=llm, verbose=True)
```
"./data/your_pdf_directory/"
with the path to your directory containing the PDF data.best
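After the engine is created, you can query the PDF content conversationally, including with streaming output. A minimal sketch (the question is illustrative):

```python
# Stream the answer token by token.
streaming_response = chat_engine.stream_chat("Summarize the main points of the PDFs.")
for token in streaming_response.response_gen:
    print(token, end="")
```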
You can use the `best` chat mode even if you're using a Llama model. The `best` chat mode in LlamaIndex is designed to select the most suitable chat engine based on the current LLM (Large Language Model): it corresponds to an `OpenAIAgent` if you're using an OpenAI model that supports the function calling API, and otherwise it corresponds to a `ReActAgent`. In other words, if your model supports function calling, the `best` chat mode will use the `OpenAIAgent`; if not, it will use the `ReActAgent`. This ensures that you're always using the most suitable chat engine for your model, which helps optimize the performance of your chatbot.
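For example, with a non-OpenAI model the `best` mode falls back to the ReAct agent. A minimal sketch, assuming the Ollama integration (`llama-index-llms-ollama`) is installed and a local Llama model is available; the model name is illustrative:

```python
from llama_index.llms.ollama import Ollama

# A Llama model served locally via Ollama (model name is illustrative).
llama_llm = Ollama(model="llama3", request_timeout=120.0)

# "best" cannot use the OpenAI function-calling agent with this model,
# so it falls back to a ReAct-style agent under the hood.
chat_engine = index.as_chat_engine(chat_mode="best", llm=llama_llm, verbose=True)
response = chat_engine.chat("Give me a brief overview of the documents.")
print(response)
```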
When using `index.as_chat_engine()`, you can change the prompts by passing them as arguments to the function. Here's an example:

```python
from llama_index.core import PromptTemplate

# Define your custom prompt
custom_prompt = PromptTemplate(
    """\
Given a conversation (between Human and Assistant) and a follow up message from Human, \
rewrite the message to be a standalone question that captures all relevant context \
from the conversation.

<Chat History>
{chat_history}

<Follow Up Message>
{question}

<Standalone question>
"""
)

# Use the custom prompt when creating the chat engine
chat_engine = index.as_chat_engine(condense_question_prompt=custom_prompt)
```
In this example, `custom_prompt` is a `PromptTemplate` object that defines the new prompt; you can replace the string inside `PromptTemplate` with your own custom prompt. The arguments you can pass to `as_chat_engine()` depend on the specific chat engine you're using. For more details, refer to the LlamaIndex documentation.
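To see what the condense step will actually send to the LLM, you can render the template yourself. A small sketch (the chat history and follow-up message are made up for illustration):

```python
# Fill the template variables to inspect the final prompt text.
prompt_text = custom_prompt.format(
    chat_history="Human: Tell me about LlamaIndex.\nAssistant: It is a data framework for LLMs.",
    question="How do I customize its prompts?",
)
print(prompt_text)
```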
`PromptTemplate` is used with the `index.as_chat_engine()` method to define the structure of the prompts the chat engine will use; it provides a way to dynamically generate prompts based on the current state of the conversation. The `from_defaults()` method, by contrast, is a class method used to create an instance of a chat engine with default parameters. It allows you to customize the chat engine by passing in your own parameters, such as a custom prompt, but it doesn't require a `PromptTemplate`. In short, `index.as_chat_engine()` is a higher-level function designed to make it easy to create a chat engine with a specific configuration, and it uses `PromptTemplate` to provide a flexible way to define the chat engine's prompts, while `from_defaults()` is a lower-level method that gives you more control over the creation of the chat engine but requires you to specify the parameters manually.
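As an illustration of the lower-level path, here is a rough sketch of building a condense-question chat engine directly with `from_defaults()`; it assumes `index` and `custom_prompt` are the objects defined above, and the exact set of keyword arguments may vary between LlamaIndex versions:

```python
from llama_index.core.chat_engine import CondenseQuestionChatEngine

# Lower-level construction: build the underlying query engine yourself,
# then assemble the chat engine explicitly.
query_engine = index.as_query_engine()
chat_engine = CondenseQuestionChatEngine.from_defaults(
    query_engine=query_engine,
    condense_question_prompt=custom_prompt,  # the PromptTemplate defined earlier
    verbose=True,
)
```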
To customize the system message, you can override the `get_system_message()` method in the `Neo4jQueryToolSpec` class if you're using the Neo4j tool, or you can define a `ChatMessage` with the role set to `MessageRole.SYSTEM` if you're creating a chat prompt template.

Here's an example with the `Neo4jQueryToolSpec` class:

```python
from llama_index.tools.neo4j import Neo4jQueryToolSpec


class CustomNeo4jQueryToolSpec(Neo4jQueryToolSpec):
    def get_system_message(self):
        return "Your custom system message"


# Construct the spec with your usual Neo4j connection arguments.
tool_spec = CustomNeo4jQueryToolSpec()
system_message = tool_spec.get_system_message()
```
In this example, `CustomNeo4jQueryToolSpec` is a subclass of `Neo4jQueryToolSpec` that overrides the `get_system_message()` method to return a custom system message.

Here's an example with `ChatMessage`:

```python
from llama_index.core import ChatPromptTemplate
from llama_index.core.llms import ChatMessage, MessageRole

chat_messages = [
    ChatMessage(
        role=MessageRole.SYSTEM,
        content="Your custom system message",
    ),
    ChatMessage(role=MessageRole.USER, content="Your user message"),
]
chat_prompt_template = ChatPromptTemplate(chat_messages)
```
In this example, the first `ChatMessage` has its role set to `MessageRole.SYSTEM`, which means it will be used as the system message.
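To confirm what will actually be sent to the model, you can render the template into messages and pass them to an LLM. A minimal sketch, assuming an OpenAI LLM is configured (the model name is illustrative):

```python
from llama_index.llms.openai import OpenAI

# Render the template into a list of ChatMessage objects
# (any {variables} in the message contents would be filled here).
messages = chat_prompt_template.format_messages()

# Send the system + user messages to the LLM.
llm = OpenAI(model="gpt-4")
response = llm.chat(messages)
print(response.message.content)
```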