What is FunctionCallingAgentWorker and how different is it from ReAct Agent ?

@kapa.ai What is FunctionCallingAgentWorker and how different is it from ReAct Agent ?
Please stand by @praveenks30#120294 as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
FunctionCallingAgentWorker and ReActAgent are both components of the LlamaIndex system, but they serve different purposes.

FunctionCallingAgentWorker is a component that lets the agent use an LLM's native function calling capabilities. It is used to create an agent worker from a set of tools; that worker can then be turned into an agent that reasons over and calls those tools. The FunctionCallingAgentWorker can be configured to allow parallel tool calls, meaning the LLM may request several tool calls in a single step. Here is an example of how to create a FunctionCallingAgentWorker:

Plain Text
from llama_index.core.agent import FunctionCallingAgentWorker

agent_worker = FunctionCallingAgentWorker.from_tools(
    [multiply_tool, add_tool],
    llm=llm,
    verbose=True,
    allow_parallel_tool_calls=False,
)
agent = agent_worker.as_agent()

Source ()
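
The example above assumes multiply_tool and add_tool are already defined. A minimal sketch of defining them with FunctionTool (the same pattern shown later in this thread):

Plain Text
from llama_index.core.tools import FunctionTool

def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b

def add(a: int, b: int) -> int:
    """Add two integers and return the result."""
    return a + b

multiply_tool = FunctionTool.from_defaults(fn=multiply)
add_tool = FunctionTool.from_defaults(fn=add)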

On the other hand, ReActAgent is a specific type of agent that subclasses AgentRunner with a ReActAgentWorker. It is used for more complex tasks and can be customized with various parameters such as tools, memory, max iterations, and more. Here is an example of how to create a ReActAgent:

Plain Text
from llama_index.core.agent import ReActAgent

# construct a ReActAgent directly from tools
agent = ReActAgent.from_tools(tools, llm=llm, verbose=True)

Source ()
In summary, both FunctionCallingAgentWorker and ReActAgent are used to create agents in LlamaIndex, but FunctionCallingAgentWorker relies on the LLM's native function calling API, while ReActAgent implements tool use through ReAct-style prompting.
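
Once constructed, either kind of agent is queried the same way through the AgentRunner interface; a minimal usage sketch, assuming the agent and tools above are in scope (the question string is illustrative):

Plain Text
response = agent.chat("What is (121 * 3) + 42?")
print(str(response))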
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai So, what are the scenarios for using these agents? Where should I use FunctionCallingAgentWorker and where should I use the ReAct agent?
Let me check the knowledge base for you. One moment! :books:
----------------------
The choice between FunctionCallingAgentWorker and ReActAgent depends on the complexity of the tasks you want your agent to perform.

FunctionCallingAgentWorker is typically used when you want your agent to perform relatively simple tool-driven tasks, such as calling functions or querying data, and when the underlying LLM supports native function calling. Here is an example of how to use FunctionCallingAgentWorker:

Plain Text
from llama_index.core.agent import FunctionCallingAgentWorker

agent_worker = FunctionCallingAgentWorker.from_tools(
    [multiply_tool, add_tool],
    llm=llm,
    verbose=True,
    allow_parallel_tool_calls=False,
)
agent = agent_worker.as_agent()

Source ()

On the other hand, ReActAgent is used for more complex tasks. It combines chain-of-thought reasoning with action planning, enabling LLMs to produce reasoning traces and task-specific actions. It is particularly useful when you need your agent to perform complex reasoning tasks, interact with multiple tools, or maintain a memory of past interactions. Here is an example of how to use ReActAgent:

Plain Text
from llama_index.core.agent import ReActAgent

# construct a ReActAgent directly from tools
agent = ReActAgent.from_tools(tools, llm=llm, verbose=True)

Source ()
In terms of use cases, agents can be used for a wide range of tasks. For example, you could build a context-augmented research assistant over your data that can answer complex research tasks (Agentic RAG), a "text-to-SQL assistant" that can interact with a structured database (SQL Agent), an agent that can operate over common workflow tools like email, calendar (Workflow Assistant), or an agent that can operate over code (Coding Assistant). Source ()
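
As a rough sketch of the Agentic RAG pattern (hedged; index, llm and the tool name/description below are placeholders for your own data and model), you wrap a query engine over your documents as a tool and hand it to an agent:

Plain Text
from llama_index.core.tools import QueryEngineTool, ToolMetadata
from llama_index.core.agent import FunctionCallingAgentWorker

# `index` is a previously built index over your own documents (placeholder)
rag_tool = QueryEngineTool(
    index.as_query_engine(),
    metadata=ToolMetadata(
        name="research_notes",
        description="Answers questions over the research notes.",
    ),
)

agent = FunctionCallingAgentWorker.from_tools(
    [rag_tool], llm=llm, verbose=True
).as_agent()
response = agent.chat("Summarize the key findings in the notes.")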
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai Could you please write code for a case where FunctionCallingAgentWorker may not work but a ReAct-style agent worker would?
Let me check the knowledge base for you. One moment! :books:
----------------------
I'm sorry, but the provided knowledge sources do not contain specific examples where FunctionCallingAgentWorker fails and a ReAct-style AgentWorker works. The two are designed for different types of tasks, and the right choice depends on the specific requirements of your project.

FunctionCallingAgentWorker is typically used for simpler tasks where you need to call specific functions or tools. On the other hand, AgentWorker (used in the context of ReActAgent) is designed for more complex tasks that involve chain-of-thought reasoning and action planning.

If you have a specific scenario or task in mind, I would recommend referring to the LlamaIndex documentation or the specific documentation for FunctionCallingAgentWorker and AgentWorker to understand their capabilities and limitations.
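
That said, as a rough illustration of the usual failure mode (a hedged sketch, not taken from the knowledge sources): FunctionCallingAgentWorker is expected to reject an LLM that does not advertise native function calling support, while a ReAct agent can still drive the same tools through plain-text prompting. The name non_fc_llm is hypothetical and stands for any LLM without tool support:

Plain Text
from llama_index.core.agent import FunctionCallingAgentWorker, ReActAgent

try:
    # Expected to fail when the LLM does not support native function calling
    worker = FunctionCallingAgentWorker.from_tools(
        [multiply_tool], llm=non_fc_llm, verbose=True
    )
except Exception as err:  # the exact exception type may vary between versions
    print(f"Function calling agent rejected this LLM: {err}")

# The ReAct loop only needs text completion, so it still works:
react_agent = ReActAgent.from_tools([multiply_tool], llm=non_fc_llm, verbose=True)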
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai Please provide me details about the default_vectorstore.json, docstore.json, vector_store.json
Let me check the knowledge base for you. One moment! :books:
----------------------
I'm sorry, but the provided knowledge sources do not contain specific information about default_vectorstore.json, docstore.json, or vector_store.json.

However, based on the context provided in the knowledge sources, we can infer that these files are likely used to persist data in LlamaIndex.

For example, docstore.json is mentioned in the context of saving and loading data with a vector store. The code snippet provided suggests that docstore.json is used to persist the document store:

Plain Text
storage_context.docstore.persist("./docstore.json")

Source ()

As for default_vectorstore.json and vector_store.json, while they are not explicitly mentioned in the provided knowledge sources, it's reasonable to assume that they are used to persist vector stores, given the naming convention and the context provided.

For more specific details about these files and how they are used in LlamaIndex, I would recommend referring to the LlamaIndex documentation or the specific documentation for persisting data in LlamaIndex.
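
As a minimal sketch of how these files typically come about (assuming the default local storage backends; exact file names can vary slightly between LlamaIndex versions, and the data directory is illustrative):

Plain Text
from llama_index.core import (
    VectorStoreIndex,
    SimpleDirectoryReader,
    StorageContext,
    load_index_from_storage,
)

docs = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(docs)

# Writes docstore.json, index_store.json and the vector store JSON
# (e.g. default__vector_store.json) into ./storage
index.storage_context.persist(persist_dir="./storage")

# Later: reload the index from the persisted files
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)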
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai what is a sentence reranker ?
Let me check the knowledge base for you. One moment! :books:
----------------------
A sentence reranker, in the context of LlamaIndex, is a node postprocessor that reorders retrieved nodes (or sentences) based on their relevance to a given query; this process is known as reranking. The reranker is configured with the number of top nodes to return after reranking: if that number equals the number of input nodes, the nodes are simply reordered; otherwise, only the top n nodes are returned after reordering.

There are multiple classes of reranking algorithms offered by LlamaIndex, such as SentenceTransformerRerank, CohereRerank, and LLMRerank. These rerankers use different models and techniques to rerank the nodes.

Here is an example of how to use the SentenceTransformerRerank:

Plain Text
from llama_index.core.postprocessor import SentenceTransformerRerank

rerank = SentenceTransformerRerank(
    model="cross-encoder/ms-marco-MiniLM-L-2-v2", top_n=3
)

Source ()

And here is an example of how to use the CohereRerank:

Plain Text
from llama_index.postprocessor.cohere_rerank import CohereRerank

reranker = CohereRerank(top_n=10)

Source ()

In both examples, top_n is the number of nodes that will be returned after reranking. The model parameter in SentenceTransformerRerank specifies the model to be used for reranking.
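
A hedged sketch of where a reranker usually plugs in: retrieve a larger candidate set from an existing index, then let the reranker keep the best few (index is assumed to be an already-built index, and the query string is illustrative):

Plain Text
query_engine = index.as_query_engine(
    similarity_top_k=10,            # retrieve 10 candidates
    node_postprocessors=[rerank],   # rerank and keep only top_n=3
)
response = query_engine.query("What did the author do growing up?")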
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai How to use Ollama's tool support with ReAct agent ?
Let me check the knowledge base for you. One moment! :books:
----------------------
To use Ollama's tool support with a ReAct agent, you first need to create the tools you want to use. In the provided knowledge sources, there are examples of creating QueryEngineTool instances for searching over specific data. Here's an example:

Plain Text
from llama_index.core.tools import QueryEngineTool, ToolMetadata

drake_tool = QueryEngineTool(
    drake_index.as_query_engine(),
    metadata=ToolMetadata(
        name="drake_search",
        description="Useful for searching over Drake's life.",
    ),
)

kendrick_tool = QueryEngineTool(
    kendrick_index.as_query_engine(),
    metadata=ToolMetadata(
        name="kendrick_search",
        description="Useful for searching over Kendrick's life.",
    ),
)

query_engine_tools = [drake_tool, kendrick_tool]

Source ()
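
The snippet assumes drake_index and kendrick_index already exist; a minimal sketch of building them from local files (the directory paths are illustrative):

Plain Text
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

drake_docs = SimpleDirectoryReader("./data/drake").load_data()
kendrick_docs = SimpleDirectoryReader("./data/kendrick").load_data()

drake_index = VectorStoreIndex.from_documents(drake_docs)
kendrick_index = VectorStoreIndex.from_documents(kendrick_docs)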

Once you have your tools, you can create a ReAct agent using these tools:

Plain Text
from llama_index.core.agent import ReActAgent

agent = ReActAgent.from_tools(
    query_engine_tools,
    llm=llm_replicate,
    verbose=True,
)

Source ()

In this example, llm_replicate is the language model you're using, and verbose=True means the agent will print out additional information about its operation. query_engine_tools is the list of tools the agent will use.

Please note that you'll need to replace drake_index and kendrick_index with the actual indexes you want to use, and llm_replicate with the language model you want to drive the agent with; for Ollama, that means passing an Ollama LLM instance, as sketched below.
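
For the Ollama part specifically, a minimal sketch (assuming the llama-index-llms-ollama package is installed and an Ollama server is running locally; the model name is illustrative):

Plain Text
from llama_index.llms.ollama import Ollama
from llama_index.core.agent import ReActAgent

# An Ollama-served model drives the ReAct loop via plain-text prompting
llm = Ollama(model="llama3.1", request_timeout=120.0)

agent = ReActAgent.from_tools(query_engine_tools, llm=llm, verbose=True)
response = agent.chat("Compare Drake's and Kendrick's early careers.")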
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai can I use FunctionTool as well ?
Let me check the knowledge base for you. One moment! :books:
----------------------
Yes, you can use FunctionTool with a ReAct agent. FunctionTool is a wrapper around any existing function, allowing you to easily convert any user-defined function into a tool that can be used by the agent. Here's an example of how to create a FunctionTool:

Plain Text
from llama_index.core.tools import FunctionTool

def multiply(a: int, b: int) -> int:
    """Multiply two integers and returns the result integer"""
    return a * b

multiply_tool = FunctionTool.from_defaults(fn=multiply)

Source ()

Once you have your FunctionTool, you can include it in the list of tools when creating a ReAct agent:

Plain Text
from llama_index.core.agent import ReActAgent

tools = [multiply_tool]  # add other tools as needed
agent = ReActAgent.from_tools(tools, llm=llm, verbose=True)

In this example, llm is the language model you're using. Please replace it with the actual language model you want to use.

Remember that the function you're wrapping with FunctionTool needs to be defined with the correct input and output types, and should include a docstring that describes what the function does. This information is used by the agent to understand how to use the tool.
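
If you prefer not to rely on the function name and docstring, FunctionTool.from_defaults also accepts an explicit name and description (a hedged sketch; the overridden name is illustrative):

Plain Text
from llama_index.core.tools import FunctionTool

def add(a: int, b: int) -> int:
    """Add two integers and return the result."""
    return a + b

add_tool = FunctionTool.from_defaults(
    fn=add,
    name="add_numbers",  # explicit tool name instead of the function name
    description="Add two integers and return the sum.",
)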
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai How is it different from FunctionCallingAgentWorker ?
Let me check the knowledge base for you. One moment! :books:
----------------------
FunctionTool and FunctionCallingAgentWorker are both components of the LlamaIndex system, but they serve different purposes and are used in different contexts.

FunctionTool is a wrapper around any existing function, allowing you to easily convert any user-defined function into a tool that can be used by an agent. It's used to encapsulate a function and its associated metadata so that it can be used as a tool within the LlamaIndex system. Here's an example of how to create a FunctionTool:

Plain Text
from llama_index.core.tools import FunctionTool

def multiply(a: int, b: int) -> int:
    """Multiply two integers and returns the result integer"""
    return a * b

multiply_tool = FunctionTool.from_defaults(fn=multiply)

Source ()

On the other hand, FunctionCallingAgentWorker is a component that allows the model to use function calling capabilities. It is used to create an agent worker from a set of tools. This worker can then be used to create an agent that can perform tasks such as reasoning and querying. Here is an example of how to create a FunctionCallingAgentWorker:

Plain Text
from llama_index.core.agent import FunctionCallingAgentWorker

agent_worker = FunctionCallingAgentWorker.from_tools(
    [multiply_tool, add_tool],
    llm=llm,
    verbose=True,
    allow_parallel_tool_calls=False,
)
agent = agent_worker.as_agent()

Source ()
In summary, FunctionTool and FunctionCallingAgentWorker operate at different levels in LlamaIndex: FunctionTool encapsulates a single function as a tool, while FunctionCallingAgentWorker creates an agent that can decide when and how to call such tools.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai What's the difference between ReActAgent and FunctionCallingAgentWorker? Will FunctionCallingAgentWorker only work with models that support function calling/tool support, and the ReAct agent with all models?
Let me check the knowledge base for you. One moment! :books:
----------------------
ReActAgent and FunctionCallingAgentWorker are both components of the LlamaIndex system, but they serve different purposes and are used in different contexts.

ReActAgent is a specific type of agent that combines chain-of-thought reasoning with action planning, enabling LLMs to produce reasoning traces and task-specific actions. It is particularly useful when you need your agent to perform complex reasoning tasks, interact with multiple tools, or maintain a memory of past interactions. Here is an example of how to use ReActAgent:

Plain Text
from llama_index.core.agent import ReActAgent

# construct a ReActAgent directly from tools
agent = ReActAgent.from_tools(tools, llm=llm, verbose=True)

Source ()

On the other hand, FunctionCallingAgentWorker is a component that allows the model to use function calling capabilities. It is used to create an agent worker from a set of tools. This worker can then be used to create an agent that can perform tasks such as reasoning and querying. Here is an example of how to create a FunctionCallingAgentWorker:

Plain Text
from llama_index.core.agent import FunctionCallingAgentWorker

agent_worker = FunctionCallingAgentWorker.from_tools(
    [multiply_tool, add_tool],
    llm=llm,
    verbose=True,
    allow_parallel_tool_calls=False,
)
agent = agent_worker.as_agent()

Source ()
In terms of model support, FunctionCallingAgentWorker requires a model that exposes a native function calling/tool API; it passes the schemas of the tools (FunctionTool or QueryEngineTool instances) directly to that API.

ReActAgent, on the other hand, drives tool use through plain-text ReAct prompting, where the model writes out its reasoning and the action to take next. It therefore does not depend on a native function calling API and works with a much wider range of models, including those without tool support.

In summary, both ReActAgent and FunctionCallingAgentWorker create tool-using agents in LlamaIndex. FunctionCallingAgentWorker is the natural choice when the model supports function calling, while ReActAgent achieves the same behaviour through prompting and also works with models that lack native tool support.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------