----------------------
Yes, there is a way to get metadata when using the chat engine in LlamaIndex. The metadata can be accessed via response.metadata. This metadata often includes information about the source document or filename from which the response was generated.
However, if you're getting an empty list, it might be due to the specific configuration of your chat engine or the nature of the query. It's also possible that the response doesn't have any associated metadata.
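For reference, here is a minimal end-to-end sketch of this. The data directory, chat mode, and question are placeholders, and import paths as well as the exact shape of the returned metadata can vary between LlamaIndex versions:

from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Load documents and build an index (the path is a placeholder).
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Create a chat engine and ask a question.
chat_engine = index.as_chat_engine(chat_mode="condense_question")
response = chat_engine.chat("What is this document about?")

# Inspect the metadata attached to the response, as described above.
print(response.metadata)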
Here are some examples of how metadata is accessed in different contexts:
- In the Knowledge Graph RAG Query Engine, metadata is accessed as follows:
import pprint
pp = pprint.PrettyPrinter()
pp.pprint(response.metadata)
Source ()
- In the Fuzzy Citation Query Engine, metadata is accessed and used to get the exact character location of the response from the original document:
for chunk_info in response.metadata.values():
    start_char_idx = chunk_info["start_char_idx"]
    end_char_idx = chunk_info["end_char_idx"]

    node = chunk_info["node"]
    node_start_char_idx = node.start_char_idx
    node_end_char_idx = node.end_char_idx

    # Using the node start and end char idx, we can offset the
    # citation chunk to locate the citation in the original document.
    document_start_char_idx = start_char_idx + node_start_char_idx
    document_end_char_idx = document_start_char_idx + (end_char_idx - start_char_idx)
    text = documents[0].text[document_start_char_idx:document_end_char_idx]

    print(text)
    print(node.metadata)
    print("----------------")
Source ()
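For context, the loop above assumes that response comes from the Fuzzy Citation Engine Pack and that documents is the list returned by the original document loader. A rough setup sketch is shown below; the pack name, download directory, and import paths are assumptions based on the llama-hub pack listing and may differ in your version:

from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.llama_pack import download_llama_pack

# Download the Fuzzy Citation Engine Pack (name and directory are assumptions).
FuzzyCitationEnginePack = download_llama_pack(
    "FuzzyCitationEnginePack", "./fuzzy_citation_pack"
)

# Load the original documents and build a plain query engine over them.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Wrap the query engine with the pack and run a query; the resulting
# response.metadata feeds the citation loop shown above.
fuzzy_engine = FuzzyCitationEnginePack(index.as_query_engine())
response = fuzzy_engine.run("What is this document about?")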