
Hi, how do I stop an agent from making up a "nice" answer and instead return just the raw response from the tool? For example, if I use the image generator tool, it produces the right output, but the answer is always verbose:

Plain Text
from llama_index.agent.openai import OpenAIAgentWorker, OpenAIAgent
# Import and initialize our tool spec
from .image_tool_spec import TextToImageToolSpec

text_to_image_spec = TextToImageToolSpec()
tools = text_to_image_spec.to_tool_list()
# Create the Agent with our tools
agent = OpenAIAgent.from_tools(tools, verbose=False)
print(agent.chat("show an image of a beautiful beach with a palm tree at sunset"))

Thanks!
8 comments
I think tools need a "return direct" option -- but it's something we haven't implemented yet
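Roughly, the idea would be something like this (purely illustrative names, not a real LlamaIndex API):
Plain Text
# hypothetical sketch of a tool-level "return direct" flag: when it is set,
# the agent hands back the raw tool output instead of asking the LLM to rephrase it
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class SketchTool:
    fn: Callable[[str], Any]
    return_direct: bool = False

def answer(tool: SketchTool, prompt: str, rephrase_with_llm: Callable[[Any], str]) -> Any:
    raw = tool.fn(prompt)
    if tool.return_direct:
        return raw                    # skip the extra LLM call entirely
    return rephrase_with_llm(raw)     # current behaviour: a verbose LLM answer

# example with a fake image tool that just returns a URL-like string
image_tool = SketchTool(fn=lambda p: "https://example.com/beach.png", return_direct=True)
print(answer(image_tool, "beach with a palm tree", rephrase_with_llm=str))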
Ahhh, such a pity... I need it so much 😦
Actually, I found kind of a workaround, though I'm not sure it's the right way. Instead of chatting with the agent, I can create a task:
Plain Text
agent = OpenAIAgent.from_tools(tools, verbose=False)
task = agent.create_task("show an image of a beautiful beach with a palm tree at sunset")
output = agent.run_step(task.task_id)
# read the raw tool output from the step's sources instead of the LLM's final answer
print(output.output.sources[0].raw_output)
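Wrapped up as a small reusable helper (my own hypothetical function, not anything shipped with LlamaIndex), the workaround looks like this:
Plain Text
# hypothetical helper: run a single agent step and return the raw tool output,
# skipping the final LLM rephrasing
def run_tool_direct(agent, prompt):
    task = agent.create_task(prompt)
    step_output = agent.run_step(task.task_id)
    return step_output.output.sources[0].raw_output

print(run_tool_direct(agent, "show an image of a beautiful beach with a palm tree at sunset"))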
ah that works!
Would be nice to save time by skipping the LLM call that interprets the tool response, though, haha
Yes, exactly! Sometimes all we need is just to generate or do something from plain language, and that's it 🙂
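Along those lines, you could even skip the agent entirely and call the tool object from to_tool_list() directly. This is only a rough sketch: it assumes the first tool in the list is the text-to-image one and that call() returns a ToolOutput with raw_output:
Plain Text
# call the tool directly: no agent, no LLM call at all
tool = tools[0]
result = tool.call("a beautiful beach with a palm tree at sunset")
print(result.raw_output)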
I agree! It's on my todo list to implement that return_direct / skip-reasoning feature