From a `.chat()` response, I'm trying to collect information about the steps executed by the agent. I mainly need to know the source of that response. Most of the time I use the high-level APIs for simplicity, but if there's some way to make this customization by going to a low-level API, I'd be glad to hear about it.

The response object has a `source_nodes` prop, and this is great: I'm able to identify the nodes used to compose that response.
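For anyone else looking, this is roughly how I'm inspecting it now (a sketch; it assumes a recent `llama_index` where `agent.chat()` returns an `AgentChatResponse` carrying `sources` and `source_nodes`, and the question string is just a placeholder):

```python
# Sketch: tracing where a .chat() response came from.
# `agent` is the agent built elsewhere in the thread.
response = agent.chat("What does the report say about Q3?")  # placeholder question

# Each entry in `sources` is a ToolOutput recording which tool
# the agent executed and with what input.
for tool_output in response.sources:
    print(tool_output.tool_name, tool_output.raw_input)

# The retrieved nodes that backed the final answer.
for node_with_score in response.source_nodes:
    print(node_with_score.node.node_id, node_with_score.score)
```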
I was using a `RouterQueryEngine`, and that worked very well. But I ended up realizing I wanted a chat-style interface, via chat engines or agents. So I wrapped that `RouterQueryEngine` in a `QueryEngineTool` and passed it into an `OpenAIAgent`. There's just this single tool, but the agent still has to make an LLM call to decide to use it. Is there any way to set this `QueryEngineTool` as the default to be executed and skip that LLM call? Thanks!
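What I've tried so far, in case someone can confirm (a sketch; that `tool_choice` is forwarded to the OpenAI function-calling API is my assumption here, and the tool name and description are made up):

```python
from llama_index.agent.openai import OpenAIAgent
from llama_index.core.tools import QueryEngineTool, ToolMetadata

router_tool = QueryEngineTool(
    query_engine=router_query_engine,  # the RouterQueryEngine from above
    metadata=ToolMetadata(
        name="docs_router",
        description="Answers any question about my documents.",
    ),
)
agent = OpenAIAgent.from_tools([router_tool], verbose=True)

# Forcing the tool by name: the first LLM call must invoke it
# instead of deliberating about whether to use it.
response = agent.chat("my question here", tool_choice="docs_router")
```

Even with `tool_choice`, I suppose the agent still spends one LLM call generating the tool input; calling `router_query_engine.query()` directly seems to be the only way to skip LLM calls entirely.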
Is the `ToolRetrieverRouterQueryEngine.query` method supposed to work with `FunctionTool`? It tries to access a `query_engine` attribute on `FunctionTool`, which doesn't exist... More details in the thread.
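A minimal repro of what I'm running (the `multiply` function is just a stand-in; imports assume the 0.10+ `llama_index.core` layout):

```python
from llama_index.core import VectorStoreIndex
from llama_index.core.objects import ObjectIndex
from llama_index.core.query_engine import ToolRetrieverRouterQueryEngine
from llama_index.core.tools import FunctionTool

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

tools = [FunctionTool.from_defaults(fn=multiply)]

# Index the tools so the router can retrieve them per query.
obj_index = ObjectIndex.from_objects(tools, index_cls=VectorStoreIndex)
engine = ToolRetrieverRouterQueryEngine(obj_index.as_retriever())

# Fails with AttributeError: 'FunctionTool' object has no attribute
# 'query_engine'; the router apparently expects QueryEngineTool only.
engine.query("what is 6 times 7?")
```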
Whenever I call `from_documents`, it goes through the process of executing the transformations again and stores a new row in that table, even though the documents are the same.
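What I'm hoping for is something like the docstore-backed deduplication in `IngestionPipeline`, where unchanged documents are skipped instead of re-transformed (a sketch; the splitter and document are placeholders, and stable `id_` values are needed for the dedup to work):

```python
from llama_index.core import Document
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.storage.docstore import SimpleDocumentStore

pipeline = IngestionPipeline(
    transformations=[SentenceSplitter()],
    docstore=SimpleDocumentStore(),  # remembers document hashes between runs
)

docs = [Document(text="hello world", id_="doc-1")]  # stable id is important

nodes = pipeline.run(documents=docs)        # transformations execute
nodes_again = pipeline.run(documents=docs)  # identical docs should be skipped
print(len(nodes), len(nodes_again))

# pipeline.persist("./pipeline_storage") should keep the docstore/cache
# across processes, so a later run can still detect duplicates.
```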
To use `update_ref_doc` from `VectorStoreIndex`, I have to pass a `Document` as an argument. But how can I recover a node from the database as a `Document`?
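The closest workaround I can see is rebuilding the `Document` from the docstore, something like the sketch below (it assumes the docstore still holds node text, which isn't true for every vector store configuration, and `"doc-1"` is a hypothetical ref-doc id):

```python
from llama_index.core import Document

# `index` is the VectorStoreIndex from the question.
ref_doc_id = "doc-1"  # hypothetical id of the document to update

# ref_doc_info maps each original document id to its node ids + metadata.
info = index.ref_doc_info[ref_doc_id]

# Pull the nodes back out of the docstore and stitch their text together.
nodes = [index.docstore.get_node(node_id) for node_id in info.node_ids]
text = "\n".join(node.get_content() for node in nodes)

doc = Document(text=text, id_=ref_doc_id, metadata=info.metadata)
index.update_ref_doc(doc)
```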