FunctionCallingAgentWorker and ReActAgent are both components of LlamaIndex, but they serve different purposes.

FunctionCallingAgentWorker lets the model use its native function-calling capabilities. It is created from a set of tools, and the resulting worker can then be turned into an agent that performs tasks such as reasoning and querying. The FunctionCallingAgentWorker can be configured to allow parallel function calls, meaning the model may issue several tool calls in a single step. Here is an example of how to create a FunctionCallingAgentWorker:

```python
from llama_index.core.agent import FunctionCallingAgentWorker

agent_worker = FunctionCallingAgentWorker.from_tools(
    [multiply_tool, add_tool],
    llm=llm,
    verbose=True,
    allow_parallel_tool_calls=False,
)
agent = agent_worker.as_agent()
```
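Once created, the agent is driven through its chat interface. A minimal usage sketch, assuming the agent above and that multiply_tool and add_tool wrap simple arithmetic functions (the question is only an illustration):

```python
# Ask the agent a question that requires calling the math tools.
response = agent.chat("What is (121 + 2) * 5?")
print(str(response))
```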
ReActAgent is a specific type of agent that subclasses AgentRunner with a ReActAgentWorker. It is used for more complex tasks and can be customized with various parameters such as tools, memory, max iterations, and more. Here is an example of how to create a ReActAgent:

```python
from llama_index.core.agent import ReActAgent

# ReActAgent.from_tools wires a ReActAgentWorker into an AgentRunner for you.
agent = ReActAgent.from_tools(tools, llm=llm, verbose=True)
```
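Since a ReActAgent can be customized with memory and a maximum number of iterations, here is a hedged sketch of what that might look like; tools and llm are assumed to already exist, and the token limit and iteration count are illustrative values:

```python
from llama_index.core.agent import ReActAgent
from llama_index.core.memory import ChatMemoryBuffer

# Chat memory the agent carries across turns (token limit is illustrative).
memory = ChatMemoryBuffer.from_defaults(token_limit=4000)

agent = ReActAgent.from_tools(
    tools,
    llm=llm,
    memory=memory,
    max_iterations=10,  # cap the number of reasoning/acting steps per query
    verbose=True,
)
```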
While both FunctionCallingAgentWorker and ReActAgent are used to create agents in LlamaIndex, they serve different purposes and have different capabilities, and the choice between them depends on the complexity of the tasks you want your agent to perform.

FunctionCallingAgentWorker is typically used for simpler tasks, such as calling functions or querying data; it is created from tools and turned into an agent exactly as in the example above.

ReActAgent is used for more complex tasks. It combines chain-of-thought reasoning with action planning, enabling LLMs to produce reasoning traces and task-specific actions. It is particularly useful when you need your agent to perform complex reasoning, interact with multiple tools, or maintain a memory of past interactions; it is created as in the ReActAgent example above.
It is hard to say from the knowledge sources alone why FunctionCallingAgentWorker fails while AgentWorker works in a given setup. The two are designed to handle different types of tasks, and their usage depends on the specific requirements of your project: FunctionCallingAgentWorker is typically used for simpler tasks where you need to call specific functions or tools, whereas AgentWorker (used in the context of ReActAgent) is designed for more complex tasks that involve chain-of-thought reasoning and action planning. Review the documentation and examples for both FunctionCallingAgentWorker and AgentWorker to understand their capabilities and limitations.

Regarding the files default_vectorstore.json, docstore.json, and vector_store.json: docstore.json is mentioned in the context of saving and loading data with a vector store, and the code snippet provided suggests that it is used to persist the document store:

```python
storage_context.docstore.persist("./docstore.json")
```

As for default_vectorstore.json and vector_store.json, while they are not explicitly mentioned in the provided knowledge sources, it is reasonable to assume that they are used to persist vector stores, given the naming convention and the context provided.
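For completeness, a hedged sketch of persisting and reloading a full storage context rather than the document store alone; index is assumed to already exist, and the directory name is a placeholder. The document store and vector store end up as JSON files inside the persist directory:

```python
from llama_index.core import StorageContext, load_index_from_storage

# Write the docstore, vector store, and index store to ./storage as JSON files.
index.storage_context.persist(persist_dir="./storage")

# Later: rebuild the storage context and reload the index from disk.
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)
```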
Several rerankers are available, including SentenceTransformerRerank, CohereRerank, and LLMRerank. These rerankers use different models and techniques to rerank the retrieved nodes.

SentenceTransformerRerank:

```python
from llama_index.core.postprocessor import SentenceTransformerRerank

rerank = SentenceTransformerRerank(
    model="cross-encoder/ms-marco-MiniLM-L-2-v2", top_n=3
)
```

CohereRerank:

```python
# Requires a Cohere API key (e.g. via the COHERE_API_KEY environment variable).
from llama_index.postprocessor.cohere_rerank import CohereRerank

reranker = CohereRerank(top_n=10)
```

In both cases, top_n is the number of nodes returned after reranking, and the model parameter of SentenceTransformerRerank specifies the model used for reranking.
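LLMRerank is mentioned above without a snippet; here is a hedged sketch of constructing it and attaching a reranker to a query engine. llm and index are assumed to already exist, and the batch size and top_n values are illustrative:

```python
from llama_index.core.postprocessor import LLMRerank

# Use the LLM itself to rerank retrieved nodes (values are illustrative).
llm_rerank = LLMRerank(llm=llm, choice_batch_size=5, top_n=3)

# Any of these rerankers can be passed to a query engine as a node postprocessor.
query_engine = index.as_query_engine(
    similarity_top_k=10,
    node_postprocessors=[llm_rerank],
)
```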
You can create QueryEngineTool instances for searching over specific data. Here's an example:

```python
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import QueryEngineTool, ToolMetadata

drake_tool = QueryEngineTool(
    drake_index.as_query_engine(),
    metadata=ToolMetadata(
        name="drake_search",
        description="Useful for searching over Drake's life.",
    ),
)
kendrick_tool = QueryEngineTool(
    kendrick_index.as_query_engine(),
    metadata=ToolMetadata(
        name="kendrick_search",
        description="Useful for searching over Kendrick's life.",
    ),
)
query_engine_tools = [drake_tool, kendrick_tool]

agent = ReActAgent.from_tools(
    query_engine_tools,
    llm=llm_replicate,
    verbose=True,
)
```

Here, llm_replicate is the language model you're using, verbose=True makes the agent print additional information about its operation, and query_engine_tools is the list of tools the agent will use. Replace drake_index and kendrick_index with the actual indexes you want to use, and llm_replicate with the actual language model.
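If you do not already have indexes for these tools, a hedged sketch of building them from local documents follows; the directory paths are placeholders and assume the files live on disk:

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load documents from placeholder directories and build one index per artist.
drake_docs = SimpleDirectoryReader("./data/drake").load_data()
kendrick_docs = SimpleDirectoryReader("./data/kendrick").load_data()

drake_index = VectorStoreIndex.from_documents(drake_docs)
kendrick_index = VectorStoreIndex.from_documents(kendrick_docs)
```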
You can use a FunctionTool with a ReAct agent. A FunctionTool is a wrapper around any existing function, allowing you to easily convert a user-defined function into a tool that can be used by the agent. Here's an example of how to create a FunctionTool:

```python
from llama_index.core.tools import FunctionTool


def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b


multiply_tool = FunctionTool.from_defaults(fn=multiply)
```

Once you have created the FunctionTool, include it in the list of tools when creating the ReAct agent:

```python
tools = [multiply_tool]  # add other tools as needed
agent = ReActAgent.from_tools(tools, llm=llm, verbose=True)
```

Here, llm is the language model you're using; replace it with the actual language model you want to use. The function wrapped by a FunctionTool should have correct input and output type annotations and a docstring describing what it does, because the agent uses this information to understand how to use the tool.
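If you prefer not to rely on the function name and docstring alone, from_defaults can also take an explicit name and description; a hedged sketch (the add function here is just an illustration):

```python
from llama_index.core.tools import FunctionTool


def add(a: int, b: int) -> int:
    """Add two integers and return the result."""
    return a + b


# Explicit name/description override what would be inferred from the function.
add_tool = FunctionTool.from_defaults(
    fn=add,
    name="add",
    description="Add two integers and return the result.",
)
```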
FunctionTool and FunctionCallingAgentWorker are both components of LlamaIndex, but they serve different purposes and are used in different contexts. A FunctionTool encapsulates a function and its associated metadata so that it can be used as a tool within LlamaIndex, as in the multiply_tool example above. A FunctionCallingAgentWorker, by contrast, is created from a set of tools and turned into an agent that can use those tools for reasoning and querying, as in the FunctionCallingAgentWorker example above. In short, FunctionTool wraps a function as a tool, while FunctionCallingAgentWorker creates an agent that can use such tools.
ReActAgent and FunctionCallingAgentWorker are both components of LlamaIndex, but they serve different purposes and are used in different contexts. ReActAgent combines chain-of-thought reasoning with action planning, enabling LLMs to produce reasoning traces and task-specific actions; it is particularly useful when your agent must perform complex reasoning, interact with multiple tools, or maintain a memory of past interactions (see the ReActAgent example above). FunctionCallingAgentWorker exposes the model's function-calling capabilities; it is created from a set of tools and turned into an agent that can perform tasks such as reasoning and querying (see the FunctionCallingAgentWorker example above).
A practical difference is model support. FunctionCallingAgentWorker is designed for models with native function calling/tool support, through which it invokes tools encapsulated as FunctionTool or QueryEngineTool instances. ReActAgent, on the other hand, drives tool use through the ReAct prompting protocol, in which the model reasons about and plans its actions in plain text, so it works with a wider range of models, not just those with function calling/tool support.
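To make the distinction concrete, a hedged sketch of pairing each agent style with an appropriate LLM; prompt_only_llm and tool_calling_llm are placeholder names for models you would construct yourself, and tools is assumed to exist:

```python
from llama_index.core.agent import FunctionCallingAgentWorker, ReActAgent

# ReAct drives tools through prompting, so any chat-capable LLM can be used.
react_agent = ReActAgent.from_tools(tools, llm=prompt_only_llm, verbose=True)

# FunctionCallingAgentWorker requires an LLM with native function/tool calling.
worker = FunctionCallingAgentWorker.from_tools(
    tools, llm=tool_calling_llm, verbose=True
)
fc_agent = worker.as_agent()
```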
In summary, while both ReActAgent and FunctionCallingAgentWorker are used to create agents in LlamaIndex, they serve different purposes and have different capabilities: ReActAgent is used for complex reasoning and action planning, while FunctionCallingAgentWorker is used for calling functions and using tools through native tool calling.