I am looking at a text extraction use case. Essentially there is a large corpus; I am doing semantic chunking and storing it in a vector DB. I want to retrieve the top_k matches from the index, re-rank them, and organize the insights into a document by topic. Of course I can raw-dog all of this with LLMs, but I am trying to figure out whether any of the LlamaIndex abstractions might be useful beyond chunking and indexing. For reference, top_k here might be on the order of ~1000. There is a summarization element to it as well, but the idea is not to dump everything into the context window at once and do a single generation pass.
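
For context, here is roughly the pipeline I have in mind, sketched with what I believe are the relevant LlamaIndex abstractions. Treat this as a rough sketch, not working code: the corpus path, the query, the rerank model, and the exact class/parameter names (especially `SemanticSplitterNodeParser`, `SentenceTransformerRerank`, and the import paths) are assumptions based on recent LlamaIndex versions and may need adjusting; the topic grouping in step 4 is the part I'd otherwise hand-roll.

```python
# Rough sketch of the pipeline (LlamaIndex 0.10+-style imports; names/params may
# need adjusting for your installed version -- this is an assumption, not tested).
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.core.node_parser import SemanticSplitterNodeParser
from llama_index.core.postprocessor import SentenceTransformerRerank
from llama_index.embeddings.openai import OpenAIEmbedding

embed_model = OpenAIEmbedding()

# 1. Semantic chunking of the corpus.
docs = SimpleDirectoryReader("corpus/").load_data()  # hypothetical path
splitter = SemanticSplitterNodeParser(embed_model=embed_model)
nodes = splitter.get_nodes_from_documents(docs)

# 2. Index the chunks (in-memory here; swap in a real vector DB in practice).
index = VectorStoreIndex(nodes, embed_model=embed_model)

# 3. Retrieve a large top_k, then re-rank down to something manageable.
retriever = index.as_retriever(similarity_top_k=1000)
candidates = retriever.retrieve("my query")  # placeholder query

reranker = SentenceTransformerRerank(
    model="cross-encoder/ms-marco-MiniLM-L-6-v2",  # placeholder rerank model
    top_n=200,
)
reranked = reranker.postprocess_nodes(candidates, query_str="my query")

# 4. From here, group the reranked nodes by topic and synthesize each topic's
#    section separately, rather than pushing all ~1000 chunks into one prompt.
```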