Hey y'all! I'm super excited to be diving into in-context learning and am loving llama_index. I'm wondering if anyone can point me in the right direction for a response synthesizer that is additive rather than refining. For example: I have a retriever with similarity_top_k=5, and I'd like to generate 5 items for each set of nodes retrieved. I've been trying to get this to work with Refine by modifying the refine_template, but it really doesn't want to append a new answer to the old one in the response. Should I try a different response synthesizer (maybe Tree Summarize?), or should I keep working on my prompts?
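Here's roughly what I have right now (simplified; the data path and the template wording are just placeholders, and I'm on a version of llama_index where PromptTemplate is importable from llama_index.prompts):

```python
from llama_index import VectorStoreIndex, SimpleDirectoryReader
from llama_index.prompts import PromptTemplate

# Build an index over some local docs (path is just a placeholder).
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# My attempt at an "additive" refine prompt: keep the existing items and
# append 5 new ones from each new chunk of context.
additive_refine_template = PromptTemplate(
    "The original request is: {query_str}\n"
    "Here is the list generated so far:\n{existing_answer}\n"
    "Using only the new context below, repeat the existing items unchanged "
    "and append 5 new items to the list.\n"
    "New context:\n{context_msg}\n"
)

query_engine = index.as_query_engine(
    similarity_top_k=5,
    response_mode="refine",
    refine_template=additive_refine_template,
)
print(query_engine.query("Generate 5 items about <topic>."))
```

Even with this, the model tends to rewrite the earlier items rather than keep adding to them.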
Currently no "additive" approach, but might make for a cool PR!
I think you were on the right track with the templates, but you'll probably want to modify the refine_template rather than the text_qa_template; the text_qa_template only handles the first retrieved chunk, while the refine_template is what gets applied to every subsequent chunk along with the existing answer.
I think I would start by adding a new response mode (maybe "accumulate"?). From there, I think it's just a matter of implementing the response builder for that mode.
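Totally untested, but something like this is roughly the logic I'd expect an accumulate-style builder to implement, done by hand with the retriever for now (build_accumulate_response is a made-up name, and I'm assuming a version where llama_index.llms.OpenAI and PromptTemplate are available):

```python
from llama_index import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms import OpenAI
from llama_index.prompts import PromptTemplate

qa_template = PromptTemplate(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, {query_str}\n"
)

def build_accumulate_response(index, llm, query_str, top_k=5):
    retriever = index.as_retriever(similarity_top_k=top_k)
    answers = []
    for i, node_with_score in enumerate(retriever.retrieve(query_str), start=1):
        # One independent LLM call per retrieved node; the previous answer is
        # never fed back in, so nothing gets rewritten -- answers only accumulate.
        prompt = qa_template.format(
            context_str=node_with_score.node.get_content(),
            query_str=query_str,
        )
        answers.append(f"Response {i}:\n{llm.complete(prompt).text.strip()}")
    return "\n\n---\n\n".join(answers)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
print(build_accumulate_response(index, OpenAI(), "generate 5 items about <topic>"))
```

The key difference from Refine is that the existing answer never goes back into the prompt, so the LLM can't rewrite it; a proper response builder would just formalize this loop behind the new response mode.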