----------------------
Yes, you can manually supply a set of nodes retrieved by one query engine to a second, filtered query engine. The idea is to run an initial query that only retrieves nodes, build a summary index over those nodes, and then synthesize a response through a query engine that applies your node postprocessor. Here is an example of how you can do this:
from llama_index import SummaryIndex
query_str = "Your query string here"
# Create a query engine that only retrieves nodes (response_mode="no_text" skips answer synthesis);
# `index` is your existing index, e.g. a vector index built over your documents
query_engine = index.as_query_engine(similarity_top_k=3, response_mode="no_text")
# Run the initial query to retrieve the top nodes
init_response = query_engine.query(query_str)
# Get the nodes from the initial response
resp_nodes = [n.node for n in init_response.source_nodes]
# Create a summary index from the nodes
summary_index = SummaryIndex(resp_nodes)
# Create a new query engine with the node postprocessors
query_engine = summary_index.as_query_engine(node_postprocessors=[node_postprocessor])
# Get the final response
response = query_engine.query(query_str)
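The snippet above assumes that index already exists. A minimal sketch of building it, assuming the same legacy llama_index package as above and documents stored in a local ./data folder (both the path and the index type are illustrative):
from llama_index import VectorStoreIndex, SimpleDirectoryReader
# Load documents from disk and build a vector index over them
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)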
In this example, node_postprocessor is a placeholder for whichever node postprocessor you want to apply; replace it with an actual postprocessor instance, as sketched below.
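For instance, a minimal sketch using KeywordNodePostprocessor to keep only nodes that mention a required keyword (the keywords here are illustrative, the import path varies between llama_index versions, and some versions require spacy for this postprocessor):
from llama_index.postprocessor import KeywordNodePostprocessor
# Keep only nodes that mention "budget" and drop any that mention "draft"
node_postprocessor = KeywordNodePostprocessor(
    required_keywords=["budget"], exclude_keywords=["draft"]
)
This instance can then be passed to summary_index.as_query_engine(node_postprocessors=[node_postprocessor]) as in the example above.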
This code is based on examples from the LlamaIndex documentation and GitHub repository.
----------------------