----------------------
Yes, there is a way to get the output of the underlying tool directly, without the synthesized response from the LLM. This can be achieved by setting the `return_direct` option to `True` in the tool class constructor when defining your tool. Here is an example:
# Imports assume the LlamaIndex v0.10+ package layout
from llama_index.core.tools import QueryEngineTool
from llama_index.agent.openai import OpenAIAgent

tool = QueryEngineTool.from_defaults(
    query_engine,  # an existing query engine, e.g. index.as_query_engine()
    name="<name>",
    description="<description>",
    return_direct=True,  # return the tool's output as the final response
)
agent = OpenAIAgent.from_tools([tool])
response = agent.chat("<question that invokes tool>")
In this example, the query engine tool is invoked and its response is returned directly as the agent's response, ending the execution loop. If `return_direct=False` were used instead, the agent would rewrite the response using the context of the chat history, or possibly make another tool call.
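For context, here is a minimal end-to-end sketch showing where the tool fits. It assumes the v0.10+ package layout, an `OPENAI_API_KEY` set in the environment, and a hypothetical `./data` directory of documents; the tool name and question are placeholders, not values from the original answer:

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.core.tools import QueryEngineTool
from llama_index.agent.openai import OpenAIAgent

# Hypothetical data directory; swap in your own documents
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# return_direct=True: the tool's raw answer becomes the final response
direct_tool = QueryEngineTool.from_defaults(
    query_engine,
    name="docs_qa",  # hypothetical name
    description="Answers questions about the loaded documents.",
    return_direct=True,
)

agent = OpenAIAgent.from_tools([direct_tool])
response = agent.chat("What do the documents say about setup?")
print(response)  # raw query engine output, no LLM rewriting

With `return_direct=False` (the default) on the same tool, the agent keeps its loop open after the tool call, so it can rephrase the tool's output or invoke other tools before answering.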
You can find more details about this in the LlamaIndex documentation.
----------------------
Note: you can ask me a follow-up question by @-mentioning me again :speech_balloon:
----------------------