Are there use cases where a decomposable query over a composed graph makes more sense than the sub-question query engine? I feel like the sub-question engine can handle everything the graph approach does.
Maybe this has to do with node post-processing. Is there a dynamic way to set similarity_top_k so that I always retrieve the maximum number of nodes that can fit inside the context window?
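To make the question concrete, here's roughly the kind of budget calculation I have in mind. This is not LlamaIndex API, just a toy sketch with made-up numbers and a hypothetical helper name:

```python
def max_top_k(context_window: int, prompt_tokens: int,
              reserved_output_tokens: int, avg_chunk_tokens: int) -> int:
    """Largest similarity_top_k whose retrieved chunks still fit in the window.

    All numbers are token counts; avg_chunk_tokens is an estimate of a
    retrieved node's size. (Illustrative only, not a real LlamaIndex call.)
    """
    budget = context_window - prompt_tokens - reserved_output_tokens
    return max(budget // avg_chunk_tokens, 0)

# e.g. a 4096-token window, ~300-token prompt template, 512 tokens reserved
# for the model's answer, and ~256-token chunks:
print(max_top_k(4096, 300, 512, 256))  # -> 12
```

Ideally something like this would happen automatically at query time instead of me hard-coding a top_k up front.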
Does LlamaIndex offer any smart chunking algorithms? For example, instead of a fixed-length cutoff, can I split by paragraphs or by contextual topics?
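For context, the paragraph-aware splitting I mean would look something like this toy sketch (not LlamaIndex code, just the idea: split on blank lines, then greedily pack paragraphs into chunks instead of cutting at a fixed offset):

```python
def split_by_paragraphs(text: str, max_chars: int = 1000) -> list[str]:
    """Split text on blank lines, then pack whole paragraphs up to max_chars.

    Unlike a fixed-length cutoff, this never breaks a chunk mid-paragraph.
    (Illustrative sketch only.)
    """
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks: list[str] = []
    current = ""
    for p in paragraphs:
        # Start a new chunk if adding this paragraph would exceed the budget.
        if current and len(current) + len(p) + 2 > max_chars:
            chunks.append(current)
            current = p
        else:
            current = f"{current}\n\n{p}" if current else p
    if current:
        chunks.append(current)
    return chunks
```

Topic-based splitting would presumably need embeddings to detect where the subject shifts, which is why I'm asking whether the library already provides something along these lines.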