The community member is comparing the performance of the SQLAutoVectorQueryEngine and the OpenAIAgent, and finding that the OpenAIAgent does not perform as well even when the SQLAutoVectorQueryEngine is passed as a tool. They are seeking insights into the differences between the two and how to determine why one performs better than the other.
The community member notes that the OpenAIAgent seems to be trying to answer the query itself and then passing it to the tool, which is not the desired behavior. They would prefer the chatbot to allow the tool to create the query in the first place.
In the comments, a community member suggests using a FunctionTool to wrap the router and giving that as a tool to the Agent, so that the Agent can decide when to use it or not. This is presented as a potential solution, but the community member is open to other approaches.
Alright, so I am comparing SQLAutoVectorQueryEngine to using the OpenAIAgent class. Even when I pass in the SQLAutoVectorQueryEngine as a tool to the OpenAI agent, the OpenAI agent doesn't perform as well. Any insights into the differences between the two? How can I narrow down why one performs better than the other?
From what I can see, the OpenAI agent is actually trying to answer the query itself and then passing its own rewording into the tool, which is not what I want. I would rather the chatbot let the tool construct the query in the first place.
Also, if I just want a chatbot, but with the accuracy of the SQLAutoVectorQueryEngine, do I pass in an agent as one of its tools? That would help in cases where the user is chatting without asking for anything that requires the vector or SQL databases.
For now the best solution seems to be using a FunctionTool to wrap the router and giving that as a tool to the Agent. Then it can decide when to use it or not. Please let me know if anyone else has a better approach, thanks!
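The wrapping pattern described above can be sketched without any LlamaIndex dependencies. This is a minimal, hypothetical illustration: `MockRouterQueryEngine` and `query_databases` are stand-in names, and in real LlamaIndex code the function would be registered via `FunctionTool.from_defaults(fn=query_databases)` and passed to `OpenAIAgent.from_tools([...])` so the agent only decides *whether* to call the tool, while the router builds the actual SQL/vector query.

```python
class MockRouterQueryEngine:
    """Stand-in for SQLAutoVectorQueryEngine (or any router query engine)."""

    def __init__(self):
        self.received = []  # record exactly what the tool receives

    def query(self, query_str: str) -> str:
        # A real router would route to the SQL or vector engine here.
        self.received.append(query_str)
        return f"router answered: {query_str}"


router = MockRouterQueryEngine()


def query_databases(query_str: str) -> str:
    """Answer questions that need the SQL or vector databases.

    Pass the user's message through verbatim so the router, not the
    agent, decides how to construct the underlying query.
    """
    return router.query(query_str)


# An agent would invoke the tool with the raw user message whenever the
# tool's description matches; here we call it directly to show the flow.
user_message = "How many tracks does each artist have?"
answer = query_databases(user_message)
print(answer)
```

Because the wrapper simply forwards the raw message, the router sees the user's original question rather than an answer the agent drafted first, which addresses the behavior complained about above.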