llama_index provides a function called create_llama_agent that acts as a light wrapper around the initialize_agent function in langchain. This function is used to load an agent executor given a LlamaToolkit and an LLM (language model). The create_llama_agent function accepts an optional agent parameter, which is a string specifying the agent type to use. If both agent and agent_path are None, it will default to the zero-shot-react-description agent type. The signature of create_llama_agent:

```python
gpt_index.langchain_helpers.agents.create_llama_agent(
    toolkit: LlamaToolkit,
    llm: BaseLLM,
    agent: Optional[AgentType] = None,
    callback_manager: Optional[BaseCallbackManager] = None,
    agent_path: Optional[str] = None,
    agent_kwargs: Optional[dict] = None,
    **kwargs: Any,
) -> AgentExecutor
```
llama_index handles the default langchain prompts when using initialize_agent. For more details, you may want to refer to the Llama Index documentation (https://gpt-index.readthedocs.io/en/latest/) or the GitHub repository (https://github.com/jerryjliu/llama_index).
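For instance, a minimal sketch of loading an agent executor this way, assuming `toolkit` is an already-built LlamaToolkit (see the toolkit example further down) and using langchain's OpenAI LLM; the query string is illustrative:

```python
from langchain import OpenAI
from gpt_index.langchain_helpers.agents import create_llama_agent

# Assumes `toolkit` is a LlamaToolkit built elsewhere
# (see the toolkit example later in this thread).
llm = OpenAI(temperature=0)

# With agent and agent_path both left as None, this defaults to the
# zero-shot-react-description agent type.
agent_executor = create_llama_agent(toolkit, llm, verbose=True)
print(agent_executor.run("What was Uber's revenue in 2020?"))
```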
You can use a custom agent type with the create_llama_agent wrapper function provided by llama_index. To do this, pass the agent type as a string to the agent parameter when calling the function; the accepted values are langchain's AgentType values (zero-shot-react-description is the default). The signature of create_llama_agent:

```python
gpt_index.langchain_helpers.agents.create_llama_agent(
    toolkit: LlamaToolkit,
    llm: BaseLLM,
    agent: Optional[AgentType] = None,
    callback_manager: Optional[BaseCallbackManager] = None,
    agent_path: Optional[str] = None,
    agent_kwargs: Optional[dict] = None,
    **kwargs: Any,
) -> AgentExecutor
```

Alternatively, you can load a serialized agent by passing a file path to the agent_path parameter.
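As a sketch, assuming an existing LlamaToolkit, here is what passing the conversational-react-description agent type (one of langchain's AgentType values) might look like; that agent type expects a chat_history memory, forwarded here through the extra keyword arguments:

```python
from langchain import OpenAI
from langchain.chains.conversation.memory import ConversationBufferMemory
from gpt_index.langchain_helpers.agents import create_llama_agent

# Assumes `toolkit` is an existing LlamaToolkit.
llm = OpenAI(temperature=0)
memory = ConversationBufferMemory(memory_key="chat_history")

# The agent type is passed as a string; extra kwargs such as `memory`
# are forwarded through to langchain's initialize_agent.
agent_executor = create_llama_agent(
    toolkit,
    llm,
    agent="conversational-react-description",
    memory=memory,
    verbose=True,
)
```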
I don't have a usage example of the create_llama_agent function in the provided information extracts. However, I can show you the function signature and parameters:

```python
gpt_index.langchain_helpers.agents.create_llama_agent(
    toolkit: LlamaToolkit,
    llm: BaseLLM,
    agent: Optional[AgentType] = None,
    callback_manager: Optional[BaseCallbackManager] = None,
    agent_path: Optional[str] = None,
    agent_kwargs: Optional[dict] = None,
    **kwargs: Any,
) -> AgentExecutor
```
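For illustration, a hedged sketch of the agent_path route, which loads a serialized agent configuration instead of constructing one from an agent type; the file path is hypothetical:

```python
from langchain import OpenAI
from gpt_index.langchain_helpers.agents import create_llama_agent

# Assumes `toolkit` is an existing LlamaToolkit.
llm = OpenAI(temperature=0)

# agent and agent_path are alternatives: with agent_path set, the agent
# is loaded from the serialized config at that (hypothetical) path.
agent_executor = create_llama_agent(
    toolkit,
    llm,
    agent_path="agents/my_agent.json",
)
```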
Here is an example of how to use the create_llama_chat_agent function from the provided information extracts:

```python
from gpt_index.langchain_helpers.agents import create_llama_chat_agent
from langchain.chains.conversation.memory import ConversationBufferMemory
from langchain import OpenAI

memory = ConversationBufferMemory(memory_key="chat_history")
llm = OpenAI(temperature=0)
agent_chain = create_llama_chat_agent(
    toolkit,
    llm,
    memory=memory,
    verbose=True
)
```

In this example, a ConversationBufferMemory is created to store the chat history, and an OpenAI instance is created as the language model. The create_llama_chat_agent function is then called with the toolkit, llm, and memory parameters to create a langchain chatbot agent.
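Once created, the agent can be queried over multiple turns; the inputs below are illustrative:

```python
# The buffer memory carries chat history between calls, so later
# questions can refer back to earlier turns.
agent_chain.run(input="hi, i am bob")
agent_chain.run(input="What was Uber's revenue in 2020?")
```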
Here is how to define a LlamaToolkit using the IndexToolConfig and GraphToolConfig from the provided information extracts:

```python
from llama_index.langchain_helpers.agents import LlamaToolkit, IndexToolConfig, GraphToolConfig

# Define toolkit
index_configs = []
for y in range(2019, 2023):
    tool_config = IndexToolConfig(
        index=index_set[y],
        name=f"Vector Index {y}",
        description=f"useful for when you want to answer queries about the {y} SEC 10-K for Uber",
        index_query_kwargs={"similarity_top_k": 3},
        tool_kwargs={"return_direct": True}
    )
    index_configs.append(tool_config)

graph_config = GraphToolConfig(
    graph=graph,
    name="Graph Index",
    description="useful for when you want to answer queries that require analyzing multiple SEC 10-K documents for Uber.",
    query_configs=query_configs,
    tool_kwargs={"return_direct": True}
)

toolkit = LlamaToolkit(
    index_configs=index_configs,
    graph_configs=[graph_config]
)
```

In this example, IndexToolConfig objects are created for each index (corresponding to a given year), and a GraphToolConfig object is created for the graph. These configurations are then combined into a LlamaToolkit.
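Note that the snippet assumes index_set (vector indices keyed by year), graph (a composed graph over those indices), and query_configs were built beforehand. As a sketch, the resulting toolkit can then be handed to the chat agent from the earlier example:

```python
from langchain import OpenAI
from langchain.chains.conversation.memory import ConversationBufferMemory
from llama_index.langchain_helpers.agents import create_llama_chat_agent

memory = ConversationBufferMemory(memory_key="chat_history")
llm = OpenAI(temperature=0)
agent_chain = create_llama_chat_agent(toolkit, llm, memory=memory, verbose=True)

# Single-year questions route to a Vector Index tool; cross-year
# questions route to the Graph Index tool.
agent_chain.run(input="Compare Uber's revenue growth in the 2021 and 2022 10-Ks.")
```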