Updated 3 months ago


Hello, I am attempting to create a tool class using BaseToolSpec. I am using Pydantic Fields and a docstring with args spelled out, but no matter what, it seems the agent will formulate the tool params like { input: "some ask", lang: "en" }. So it is ignoring the tool definition. I was simply following examples in docs and official llamaindex videos. Is there some up-to-date resource I can use as an example?
Can you give an example of what your code looks like?
yep, it's basically a mock right now:
from pydantic import Field

from llama_index.core.agent import ReActAgent
from llama_index.core.tools.tool_spec.base import BaseToolSpec


class TaskSpec(BaseToolSpec):
    spec_functions = ["do_task"]

    def do_task(
        self,
        input: str = Field(
            title="input",
            description="Description of the task being performed, also serves as the unique key for this task",
        ),
    ) -> str:
        """
        Perform a task and get the result

        args:
            input (str): The task description to perform, also serves as the unique key for this task
        """
        return "I see a cat on the table!"


agent = ReActAgent.from_tools(
    TaskSpec().to_tool_list(),
    llm=llm,  # llm is defined elsewhere in my setup
    verbose=True,
)

response = agent.chat("Perform a task of checking if the cat is on the table")
print(str(response))
Hmm, so, ReAct will just use the name and description, which seem to be populated properly

Plain Text
tools = TaskSpec().to_tool_list()
print(tools[0].metadata.name)
print(tools[0].metadata.description)


Outputs
Plain Text
do_task
do_task() -> str

        Perform a task and get the result

        args:
            input (str): The task description to perform, also serves as the unique key for this task


If you are using a less-than-capable LLM though, you'll have a tough time getting a reliable agent

IMO open-source LLMs tend to hallucinate a lot when making function calls like this
Ah, that's interesting, and I was wondering if that might be part of the cause. I am using a small 3B model. I guess I may have to resort to function routing based on input prior to the LLM being involved, which is unfortunate.
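A minimal sketch of that kind of pre-LLM routing, assuming simple keyword matching. The `route_task` helper, the keyword table, and both handlers are illustrative, not part of LlamaIndex:

```python
# Illustrative pre-LLM routing: pick a handler by keyword match in the
# task string before any model call. Keywords and handlers are hypothetical.

def check_table(task: str) -> str:
    return "I see a cat on the table!"

def fallback(task: str) -> str:
    return f"No routed handler for: {task}"

# Map trigger keywords to handler functions
ROUTES = {
    "cat": check_table,
    "table": check_table,
}

def route_task(task: str) -> str:
    """Dispatch a task to the first handler whose keyword appears in it."""
    lowered = task.lower()
    for keyword, handler in ROUTES.items():
        if keyword in lowered:
            return handler(task)
    return fallback(task)

print(route_task("Perform a task of checking if the cat is on the table"))
```

This keeps tool selection deterministic, so a small model only has to produce the final answer rather than a well-formed tool call.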
I could also see if some of the newer 7B models fare better - I read the new llama 3 is as performant as GPT 3.5
also, can any other built-in agents work with a local LLM? I don't actually need/want the React agent as this won't be used directly in a web app
llama3 might be a bit better yes.

I think you'll find most open-source LLMs are a little difficult to use in tasks that require complex reasoning or anything that involves parsing outputs.

There are a few other custom agents in addition to ReAct, like chain-of-abstraction and LATS. But tbh they will probably suffer from similar issues
You might have some luck with providing additional context to the react agent.

This string will get inserted into the overall react instructions

ReActAgent.from_tools(..., context="...")
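To illustrate the mechanism (the template below is a simplified stand-in, not the actual ReAct system prompt LlamaIndex uses), the `context` string is interpolated into the agent's instructions roughly like this:

```python
# Simplified illustration of how an extra context string ends up inside
# the agent's instructions. The template is a stand-in, not the real
# ReAct prompt.

REACT_TEMPLATE = (
    "You are designed to help with a variety of tasks.\n"
    "{context}\n"
    "You have access to the following tools:\n"
    "{tool_desc}\n"
)

def build_instructions(context: str, tool_desc: str) -> str:
    """Interpolate user-supplied context into the prompt template."""
    return REACT_TEMPLATE.format(context=context, tool_desc=tool_desc)

prompt = build_instructions(
    context="Always pass the full task description as the `input` argument.",
    tool_desc="do_task(input: str) -> str",
)
print(prompt)
```

Since the context lands directly in the system prompt, a concrete instruction about the expected argument names can nudge a small model toward the right call shape.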
Helpful, thanks!