To pass a custom text_qa_template to the as_query_engine method of a ComposableGraph, you first need to define a custom prompt using the Prompt class from llama_index. This custom prompt will be used as your text_qa_template. Here's an example of how to do this with a single index:

```python
from llama_index import Prompt, VectorStoreIndex, SimpleDirectoryReader

# Load documents
documents = SimpleDirectoryReader('data').load_data()

# Define custom prompt
TEMPLATE_STR = (
    "We have provided context information below. \n"
    "---------------------\n"
    "{context_str}"
    "\n---------------------\n"
    "Given this information, please answer the question: {query_str}\n"
)
QA_TEMPLATE = Prompt(TEMPLATE_STR)

# Build index
index = VectorStoreIndex.from_documents(documents)

# Configure query engine with the custom template
query_engine = index.as_query_engine(text_qa_template=QA_TEMPLATE)

# Execute query
response = query_engine.query("What did the author do growing up?")
print(response)
```
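The `{context_str}` and `{query_str}` placeholders are standard Python format fields: at query time, the engine substitutes the retrieved context and your question into the template. A minimal sketch of the equivalent substitution (the sample context string here is purely illustrative):

```python
# Same template string as above
TEMPLATE_STR = (
    "We have provided context information below. \n"
    "---------------------\n"
    "{context_str}"
    "\n---------------------\n"
    "Given this information, please answer the question: {query_str}\n"
)

# The query engine performs an equivalent substitution internally
prompt = TEMPLATE_STR.format(
    context_str="The author grew up writing short stories.",
    query_str="What did the author do growing up?",
)
print(prompt)
```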
However, note that the as_query_engine method of a ComposableGraph does not directly accept a text_qa_template parameter. Instead, you can define custom query engines for its sub-indices, which can include the text_qa_template:

```python
from llama_index.query_engine.graph_query_engine import ComposableGraphQueryEngine

# Assuming 'graph' is your ComposableGraph object
custom_query_engines = {
    graph.root_id: graph.root_index.as_query_engine(text_qa_template=QA_TEMPLATE)
}
query_engine = graph.as_query_engine(custom_query_engines=custom_query_engines)
```

This way, the custom template is applied through the query engines of the sub-indices rather than directly on the ComposableGraph.

Note: you can ask me a follow up question by @ mentioning me again
To see the sources of a response from graph.as_query_engine, you can use the get_formatted_sources method on the response object. Here's an example:

```python
# Assuming 'query_engine' is your ComposableGraphQueryEngine object
response = query_engine.query("Your query here")

# Get formatted sources of the response
sources = response.get_formatted_sources()
print(sources)
```
Alternatively, the response.source_nodes attribute can be used to access the source nodes of a query response directly. Here's an example of how to use it:

```python
# Assuming 'query_engine' is your query engine object
response = query_engine.query("Your query here")

# Get source nodes of the response
source_nodes = response.source_nodes
print(source_nodes)
```
To get the filename of each source, you can access the metadata attribute of each source node. The filename is stored in the metadata dictionary if it was set during the creation of the document. Here's an example of how to do this:

```python
# Assuming 'response' is the response from your query
for source_node in response.source_nodes:
    filename = source_node.node.metadata.get('filename')
    print(filename)
```
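One detail worth noting about the metadata.get call above: since a source node's metadata may lack a 'filename' key, dict.get is used so that a missing key yields None (or a default you choose) instead of raising a KeyError. A tiny illustration with a hypothetical metadata dict:

```python
# Hypothetical metadata dict without a 'filename' key
metadata = {'category': 'essay'}

# .get returns None (or a supplied default) instead of raising KeyError
print(metadata.get('filename'))             # → None
print(metadata.get('filename', 'unknown'))  # → unknown
```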
Please note that this will only work if the filename was set in the metadata dictionary during the creation of the document. If it wasn't, you will need to set it manually. Here's an example of how to do this:

```python
from llama_index import Document

# Create a document with the filename in its metadata
document = Document(
    text='text',
    metadata={
        'filename': '<doc_file_name>',
        'category': '<category>'
    }
)
```
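If you load your files with SimpleDirectoryReader, you can populate this metadata automatically instead of building each Document by hand: the reader accepts a file_metadata callback that maps a file path to the metadata dict attached to every resulting Document. A minimal sketch of such a callback (the 'data/essay.txt' path is just an illustration):

```python
import os

# Callback: given a file path, return the metadata dict for that Document.
# Intended usage: SimpleDirectoryReader('data', file_metadata=filename_fn)
def filename_fn(path):
    return {'filename': os.path.basename(path)}

print(filename_fn('data/essay.txt'))  # → {'filename': 'essay.txt'}
```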