`llama_chat_agent` is an instance of an agent executor created using the `create_llama_chat_agent` function from the GPT Index Langchain integrations. It is a chatbot agent that has access to the `LlamaToolkit` and the specified language model (LLM). The agent can be used to interact with the chatbot and answer queries using the provided tools and data. Here is how `llama_chat_agent` is created:

```python
# Import path for legacy llama_index; older gpt_index releases exposed the
# same helpers under gpt_index.langchain_helpers.agents
from langchain.chains.conversation.memory import ConversationBufferMemory
from langchain.llms import OpenAI
from llama_index.langchain_helpers.agents import create_llama_chat_agent

# Conversation memory so the agent remembers the chat history
memory = ConversationBufferMemory(memory_key="chat_history")
llm = OpenAI(temperature=0)

# `toolkit` is a LlamaToolkit built from your indices (see the sketch below)
agent_chain = create_llama_chat_agent(
    toolkit,
    llm,
    memory=memory,
    verbose=True
)
```
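For context, here is a minimal sketch of how the `toolkit` argument might be built, assuming the legacy `LlamaToolkit` API where an `IndexToolConfig` wraps an index directly (field names may differ between versions):

```python
from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader
from llama_index.langchain_helpers.agents import IndexToolConfig, LlamaToolkit

# Build an index over local documents
documents = SimpleDirectoryReader('data').load_data()
index = GPTSimpleVectorIndex.from_documents(documents)

# Wrap the index as a tool the agent can call
tool_config = IndexToolConfig(
    index=index,
    name="Vector Index",
    description="useful for answering questions about the loaded documents",
    index_query_kwargs={"similarity_top_k": 3},
    tool_kwargs={"return_direct": True},
)
toolkit = LlamaToolkit(index_configs=[tool_config])
```

Once the agent is created, you can chat with it via `agent_chain.run(input="...")`.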
`LLMPredictor` is a class from GPT Index. When you call the `predict` or `apredict` method on an instance of `LLMPredictor`, it returns a tuple containing the predicted answer and the formatted prompt. You can access the formatted prompt by indexing the returned tuple. Here is an example using `LLMPredictor`:

```python
from gpt_index import LLMPredictor
from gpt_index.prompts import Prompt

# Create an instance of LLMPredictor
llm_predictor = LLMPredictor()

# Create a prompt
prompt = Prompt("Your prompt text here")

# Call the predict method and get the answer and formatted prompt
answer, formatted_prompt = llm_predictor.predict(prompt)

# Print the formatted prompt
print(formatted_prompt)
```
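If you are calling the predictor from async code, `apredict` is the awaitable counterpart. A minimal sketch, reusing `llm_predictor` and `prompt` from the snippet above and assuming `apredict` returns the same (answer, formatted prompt) tuple described earlier:

```python
import asyncio

async def main():
    # apredict mirrors predict but is awaitable
    answer, formatted_prompt = await llm_predictor.apredict(prompt)
    print(formatted_prompt)

asyncio.run(main())
```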
You can customize prompts using the `Prompt` class or a derived class like `QuestionAnswerPrompt`. Here is an example using `GPTSimpleVectorIndex` and a custom `QuestionAnswerPrompt`:

```python
from llama_index import QuestionAnswerPrompt, GPTSimpleVectorIndex, SimpleDirectoryReader

# Load documents
documents = SimpleDirectoryReader('data').load_data()

# Define custom QuestionAnswerPrompt
query_str = "What did the author do growing up?"
QA_PROMPT_TMPL = (
    "We have provided context information below. \n"
    "---------------------\n"
    "{context_str}"
    "\n---------------------\n"
    "Given this information, please answer the question: {query_str}\n"
)
QA_PROMPT = QuestionAnswerPrompt(QA_PROMPT_TMPL)

# Build GPTSimpleVectorIndex
index = GPTSimpleVectorIndex.from_documents(documents)

# Send the query with the custom prompt
response = index.query(query_str, text_qa_template=QA_PROMPT)

# Print the response and the formatted prompt
# (context_str is filled in by the index at query time, so we pass a
# placeholder here just to show the final prompt layout)
print("Response:", response)
print("Formatted Prompt:", QA_PROMPT.format(context_str="<retrieved context>", query_str=query_str))
```
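As a side note, if you only want to pre-fill part of the template up front, legacy LlamaIndex/GPT Index prompt classes also offered a `partial_format` helper; this short sketch assumes that API is available in your version:

```python
# Pre-fill query_str now; context_str stays open for the index to fill later
partial_prompt = QA_PROMPT.partial_format(query_str=query_str)
response = index.query(query_str, text_qa_template=partial_prompt)
```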
`context_str` is a variable used in the custom prompt template to represent the context information that will be provided to the LLM when querying the index. In the example provided earlier, `context_str` is a placeholder within the `QA_PROMPT_TMPL` string, which is later used to create a `QuestionAnswerPrompt` instance. When you send a query using the `index.query()` method, the GPT Index library automatically fills in `context_str` with the relevant context information from the documents in the index. This context information is then used by the LLM to generate a response to the query.

```python
# Define custom QuestionAnswerPrompt
query_str = "What did the author do growing up?"
QA_PROMPT_TMPL = (
    "We have provided context information below. \n"
    "---------------------\n"
    "{{context_str}}"
    "\n---------------------\n"
    "Given this information, please answer the question: {{query_str}}\n"
)
QA_PROMPT = QuestionAnswerPrompt(QA_PROMPT_TMPL)
```
The `{{context_str}}` placeholder within the `QA_PROMPT_TMPL` string will be replaced with the actual context information when the query is sent to the index.
Correction: the placeholders should use single curly braces `{}` instead of `{{}}`. Here's the corrected example code:

```python
# Define custom QuestionAnswerPrompt
query_str = "What did the author do growing up?"
QA_PROMPT_TMPL = (
    "We have provided context information below. \n"
    "---------------------\n"
    "{context_str}"
    "\n---------------------\n"
    "Given this information, please answer the question: {query_str}\n"
)
QA_PROMPT = QuestionAnswerPrompt(QA_PROMPT_TMPL)
```
Now, when you send a query using the `index.query()` method, the GPT Index library should automatically fill in `context_str` with the relevant context information from the documents in the index.