Hi all, I'm relatively new to llama index

Hi all, I'm relatively new to llama index. I was wondering if anyone could advise me on how to go about showing/logging the chunks retrieved to answer the intermediate questions when using SubQuestionQueryEngine.

I asked kapa, which advised me to modify the source code. I assume there is already a method that shows which chunks have been retrieved, which I can either use or at least take inspiration from.
https://discordapp.com/channels/1059199217496772688/1123701370000769155

I'd appreciate it if anyone could advise or point me towards any useful documentation.
3 comments
hmm, it's not EASILY available, but there is a way

Using the LlamaDebugHandler, the RETRIEVE events should log the nodes retrieved for each sub-query

https://gpt-index.readthedocs.io/en/latest/examples/callbacks/LlamaDebugHandler.html
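A minimal sketch of that suggestion, assuming the mid-2023 llama_index API (`ServiceContext`, `LlamaDebugHandler`, `CBEventType`) and an OpenAI key in the environment; the `docs/` directory, tool name, and query are placeholders:

```python
# Sketch: log the chunks retrieved per sub-question via LlamaDebugHandler.
# Assumes llama_index ~0.6.x (the gpt-index era API) and OPENAI_API_KEY set;
# "docs/" and the tool metadata below are hypothetical placeholders.
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.callbacks import CallbackManager, CBEventType, LlamaDebugHandler
from llama_index.query_engine import SubQuestionQueryEngine
from llama_index.tools import QueryEngineTool, ToolMetadata

# Attach the debug handler through a service context so every component
# (retrievers included) reports its events to it.
llama_debug = LlamaDebugHandler(print_trace_on_end=True)
service_context = ServiceContext.from_defaults(
    callback_manager=CallbackManager([llama_debug])
)

docs = SimpleDirectoryReader("docs/").load_data()
index = VectorStoreIndex.from_documents(docs, service_context=service_context)

tools = [
    QueryEngineTool(
        query_engine=index.as_query_engine(),
        metadata=ToolMetadata(name="docs", description="Project documentation"),
    )
]
engine = SubQuestionQueryEngine.from_defaults(
    query_engine_tools=tools, service_context=service_context
)
response = engine.query("How does feature X compare to feature Y?")

# Each RETRIEVE event pair carries the nodes fetched for one sub-query:
# the end event's payload holds the retrieved NodeWithScore objects.
for start_event, end_event in llama_debug.get_event_pairs(CBEventType.RETRIEVE):
    for node_with_score in end_event.payload.get("nodes", []):
        print(node_with_score.score, node_with_score.node.get_text()[:200])
```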
Thanks, I now see the relevant facts extracted from the chunks. LlamaDebugHandler also shows these as the nodes used to answer each sub-question: https://pastebin.com/td3Gd5iV. I'll take another look at the source code to hunt down the method producing the chunks.
Attachment: Screenshot_from_2023-06-29_14-21-14.png
Nice! You might also want to pass the service context into your query tools as well, to get more complete tracing.

To make it more organized, each sub-index could have its own callback manager instance πŸ€”
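One way that per-sub-index idea could look, under the same llama_index ~0.6.x assumptions as above; the helper name is hypothetical:

```python
# Sketch: give each sub-index its own LlamaDebugHandler so retrieval
# traces don't interleave. Assumes llama_index ~0.6.x; build_traced_index
# is a hypothetical helper, not a library function.
from llama_index import ServiceContext, VectorStoreIndex
from llama_index.callbacks import CallbackManager, CBEventType, LlamaDebugHandler


def build_traced_index(documents):
    """Return (index, debug_handler); the handler only sees this index's events."""
    debug = LlamaDebugHandler()
    ctx = ServiceContext.from_defaults(callback_manager=CallbackManager([debug]))
    index = VectorStoreIndex.from_documents(documents, service_context=ctx)
    return index, debug


# After querying, each handler can be inspected independently, e.g.:
#   pairs = debug.get_event_pairs(CBEventType.RETRIEVE)
# so the nodes retrieved by one sub-index never mix with another's.
```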