Updated 3 months ago

Is there a way to call OpenAIAgent or OpenAIAssistantAgent so that the output of the underlying tool is returned instead of the synthesized response from the llm?
----------------------
Yes, there is a way to get the output of the underlying tool directly, without the synthesized response from the LLM: set the return_direct option to True in the tool's constructor when defining your tool. Here is an example:

Plain Text
# imports assume llama-index >= 0.10 package layout
from llama_index.core.tools import QueryEngineTool
from llama_index.agent.openai import OpenAIAgent

tool = QueryEngineTool.from_defaults(
    query_engine,
    name="<name>",
    description="<description>",
    return_direct=True,
)

agent = OpenAIAgent.from_tools([tool])
response = agent.chat("<question that invokes tool>")


In this example, the query engine tool is invoked and its response is returned directly as the agent's final response, ending the execution loop. With return_direct=False, the agent would instead rewrite the tool's output using the context of the chat history, or possibly make another tool call.
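To make the control-flow difference concrete, here is a minimal, self-contained sketch in plain Python. This is an illustration of the idea only, not LlamaIndex's actual implementation: the dictionary-based tool registry, the substring tool-selection heuristic, and the tool names are all made up for the example.

Plain Text
# Sketch: an agent loop that honors a per-tool return_direct flag.
# tools maps a name to (callable, return_direct).

def run_agent(tools, question, llm_rewrite):
    """Dispatch to the first tool whose name appears in the question.
    If that tool is marked return_direct, hand back its raw output and
    stop; otherwise let the LLM rephrase it before returning."""
    for name, (fn, return_direct) in tools.items():
        if name in question:          # toy tool-selection heuristic
            output = fn(question)
            if return_direct:
                return output         # raw tool output; loop ends here
            return llm_rewrite(output)  # LLM synthesizes a final answer
    return llm_rewrite("no tool matched")

# Hypothetical tools for illustration
tools = {
    "weather": (lambda q: "72F and sunny", True),   # return_direct=True
    "math": (lambda q: "4", False),                  # return_direct=False
}

fake_llm = lambda text: f"LLM says: {text}"

print(run_agent(tools, "what is the weather?", fake_llm))  # raw tool output
print(run_agent(tools, "math: 2 + 2", fake_llm))           # rewritten by LLM

With return_direct=True the caller sees exactly what the tool produced; with False the output passes through one more LLM step first, which is what the real agent does when synthesizing a response.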

You can find more details about this in the LlamaIndex documentation.
----------------------
----------------------