Quick question on the basic SummaryIndex

Quick question on the basic SummaryIndex - if I have the equivalent of 1000 pages of C#, build a SummaryIndex, and then ask "Show me this code as a mermaid diagram", would that work? I'm assuming the main challenge is the LLM's token limits?
3 comments
It ... probably won't work?

It will send every node to the LLM. If all the nodes don't fit into the LLM's context window, it refines an answer iteratively, so the LLM gets a chance to read every node.
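The iterative refinement described above can be sketched roughly like this (a simplified illustration of the pattern, not LlamaIndex's actual implementation; the `fake_llm` stub stands in for a real model call):

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM call: echoes back the latest chunk text.
    # A real model would merge the existing answer with the new chunk.
    return prompt.split("CHUNK:", 1)[1].strip() if "CHUNK:" in prompt else prompt

def refine_answer(question: str, chunks: list[str], llm=fake_llm) -> str:
    """Fold each chunk into the answer one at a time, so every
    individual call fits in the context window even when the full
    document does not."""
    answer = ""
    for chunk in chunks:
        if not answer:
            prompt = f"Question: {question}\nCHUNK: {chunk}"
        else:
            prompt = (f"Question: {question}\n"
                      f"Existing answer: {answer}\nCHUNK: {chunk}")
        answer = llm(prompt)  # one chunk per call, refined each pass
    return answer
```

The key point is that the LLM sees one node per call plus the running answer, which is why it works for large inputs but is slow: it makes one LLM call per node.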

I don't think it fits this specific use case.
Thank you! Is there a function within LlamaIndex to summarise a large document?
That would be a summary index

More specifically, it's the TreeSummarize module that it uses.

Python
index = SummaryIndex.from_documents(documents)
query_engine = index.as_query_engine(response_mode="tree_summarize")
response = query_engine.query("Summarize the provided text.")


Or more directly/low-level

Python
from llama_index.core.response_synthesizers import TreeSummarize

synth = TreeSummarize(llm=llm)
summary = synth.get_response("Summarize the provided text.", ["text1", "text2", ...])
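For intuition, the tree-summarize idea can be sketched as a bottom-up merge: summarize chunks in small batches, then summarize those summaries, until one remains. This is a hedged illustration of the general technique, not LlamaIndex internals; `stub_summarize` stands in for an LLM so the structure is runnable:

```python
def stub_summarize(texts: list[str]) -> str:
    # Placeholder for an LLM call; a real model would return a merged summary.
    return " | ".join(texts)

def tree_summarize(chunks: list[str], fanout: int = 2, llm=stub_summarize) -> str:
    """Repeatedly collapse the current level into batches of `fanout`,
    summarizing each batch, until a single summary is left."""
    level = chunks
    while len(level) > 1:
        level = [llm(level[i:i + fanout]) for i in range(0, len(level), fanout)]
    return level[0]
```

Because each call only sees `fanout` texts, every individual prompt stays small regardless of how large the original document is, at the cost of O(n) LLM calls overall.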