
Summarization time

I'm working on a task to save and run QA over a PDF document. It takes about 2523 ms with VectorIndex, but much longer with ListIndex (I stopped it before the result came out).

I know that ListIndex takes longer because it reads the entire document, but does it always take longer whenever the entire document has to be read, for example for summarization?
Yea, a list index will always take a while, especially if the index is large.

There are a few things you can do to help with this though

First, for summarization, make sure you use response_mode="tree_summarize"
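For intuition, tree_summarize roughly means summarizing chunks in groups and then summarizing the summaries, repeating until one answer remains. A toy sketch of that idea (not llama_index's actual code; `summarize` here is a stand-in for an LLM call):

```python
# Toy sketch of the tree_summarize idea, NOT llama_index's implementation.
# `summarize` stands in for an LLM call over a group of texts.
def summarize(texts):
    # Placeholder "LLM": just joins and truncates.
    return " ".join(texts)[:200]

def tree_summarize(chunks, group_size=2):
    # Collapse the chunks level by level until one summary remains.
    while len(chunks) > 1:
        chunks = [
            summarize(chunks[i:i + group_size])
            for i in range(0, len(chunks), group_size)
        ]
    return chunks[0]
```

The point of the tree shape is that groups at the same level can be summarized independently (and in parallel), instead of refining one running answer chunk by chunk.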

You can also enable async so that it doesn't completely lock up your program (this won't speed it up per se, but at least your program can do other things while the query runs in the background)

Last, you can enable streaming to stream the output (although I'm not sure what this looks like for a list index; it might not stream until the last LLM call?)
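The async point can be sketched in plain asyncio, independent of llama_index; `slow_query` below is just a stand-in for a blocking index query:

```python
# Minimal asyncio sketch of the "doesn't lock up your program" point.
# slow_query stands in for a blocking index query; requires Python 3.9+
# for asyncio.to_thread.
import asyncio
import time

def slow_query():
    time.sleep(0.1)  # pretend this is a long LLM-backed call
    return "summary"

async def main():
    # Run the blocking call in a worker thread...
    task = asyncio.create_task(asyncio.to_thread(slow_query))
    # ...so the event loop stays free for other work meanwhile.
    other_work = "still responsive"
    result = await task
    return other_work, result

print(asyncio.run(main()))
```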

There are a few notebooks for the things I mentioned here:

https://github.com/jerryjliu/llama_index/blob/main/examples/vector_indices/SimpleIndexDemo-streaming.ipynb
https://github.com/jerryjliu/llama_index/tree/main/examples/async
Sadly, the biggest limitation is how many LLM calls are made πŸ˜…
Thanks for the help!
I would like to ask one more thing.

If I want to do a generation/correction task guided by the given docs, then I think the best choice will be the ListIndex. Because given a query, we have to find the 'keyword' or 'topic' of the query and then find the content that covers that keyword across the whole document.

For example, if the query is "Business plan of llama-index is that it should be...." and we hope to correct it with the guideline, then:

  1. We have to know that the "keyword" or "topic" of the query is "Business plan".
  2. We have to look over the guidelines for "Business plan."
  3. Following those guidelines, we have to generate/correct the query sentence.
To do step 2, I think we may have to look over the whole document.
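The three steps could be stubbed out like this (a toy sketch: `extract_topic` and `correct` stand in for LLM calls, and the `guidelines` dict stands in for the indexed documents):

```python
# Toy sketch of the three-step guided-correction flow described above.
# In a real system, extract_topic and correct would be LLM calls, and
# `guidelines` would be an index over the guideline documents.
guidelines = {
    "business plan": "State the revenue model explicitly.",
}

def extract_topic(query):
    # Step 1: find the topic of the query (naive substring match here).
    for topic in guidelines:
        if topic in query.lower():
            return topic
    return None

def correct(query, guideline):
    # Step 3: a real system would ask the LLM to rewrite the query
    # following the guideline; here we just attach it.
    return f"{query} [guideline: {guideline}]"

def guided_correction(query):
    topic = extract_topic(query)           # step 1
    guideline = guidelines.get(topic, "")  # step 2: look up the guideline
    return correct(query, guideline)       # step 3
```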

Do you have any opinions or ideas?
Have you tried using a keyword index instead? From your description this seems like a natural choice.

You could also try a tree index. It can be a little expensive to build (similar cost to a single query against a list index), but then each query will be cheaper/faster than with a list index
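Back-of-the-envelope arithmetic for that trade-off (these are rough assumptions about call counts, not measured numbers): a list-index query touches every chunk, while a tree-index query only walks one root-to-leaf path.

```python
# Rough LLM-call-count comparison; the per-chunk and branching-factor
# assumptions are illustrative, not measurements of llama_index.
import math

def list_index_query_calls(num_chunks):
    # Assume one LLM call per chunk (refine-style).
    return num_chunks

def tree_index_query_calls(num_chunks, branching=10):
    # Assume one call per tree level on the way down.
    calls, n = 0, num_chunks
    while n > 1:
        n = math.ceil(n / branching)
        calls += 1
    return calls

print(list_index_query_calls(100))  # grows linearly
print(tree_index_query_calls(100))  # grows logarithmically
```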
I need to find a keyword/topic from the query, and then use that keyword to find the relevant guideline in the index. Is a keyword index suitable in this case too?
Oh, I think it's possible and actually the more proper way; I just checked the official docs.
Yup, you got it! Llama Index will extract all the keywords for you
Or if my query looks like this,
"""- 2.1. Introduction of the item
  • Introduction of the {product_name}
    Our product is that...."""
Maybe we can manually split out the main content ("Our product is....") and the section name ("Introduction of the item"),

and use main content as a query and section name as a required_keywords.
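That manual split could look something like this (a sketch; the heading markers it strips are assumptions about the query's formatting):

```python
# Sketch of the manual split suggested above: pull the section name out
# of the heading line and use the body as the query text. The "-", "•",
# and numbering markers are assumptions about the query format.
def split_query(raw):
    lines = [line.strip() for line in raw.strip().splitlines()]
    # "- 2.1. Introduction of the item" -> "Introduction of the item"
    section = lines[0].lstrip("-• ").lstrip("0123456789. ")
    # Keep body lines, dropping the bulleted sub-heading.
    body = " ".join(line for line in lines[1:] if not line.startswith("•"))
    return section, body

query = """- 2.1. Introduction of the item
  • Introduction of the {product_name}
    Our product is that...."""
section, body = split_query(query)
```

`section` could then be passed as `required_keywords` while `body` becomes the query text.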
Possibly! You can see the prompt llama index uses to extract keywords here. You could also query GPT directly to get those keywords, like you said

https://github.com/jerryjliu/llama_index/blob/main/gpt_index/prompts/default_prompts.py#L120

I would also try using the keyword index as is too, just in case it works (would be a simple solution if it works well)
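If you do query GPT directly, the prompt could be built in the spirit of the linked default template (paraphrased here, not the exact text from default_prompts.py):

```python
# A prompt in the spirit of llama_index's default keyword-extraction
# template (paraphrased, NOT the exact text at the link above).
KEYWORD_EXTRACT_TMPL = (
    "Some text is provided below. Extract up to {max_keywords} keywords "
    "from the text, as a comma-separated list.\n"
    "---------------------\n"
    "{text}\n"
    "---------------------\n"
    "KEYWORDS: "
)

# Fill in the template; the resulting string would be sent to the LLM,
# and its answer used as required_keywords for the keyword index.
prompt = KEYWORD_EXTRACT_TMPL.format(
    max_keywords=5,
    text="Business plan of llama-index is that it should be....",
)
```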
Ok, I'll try!
  1. Is the part where I said "do it manually" the part where you said to query GPT directly?
  2. Now I see that KeywordIndex extracts keywords both from {query_str} and {text} (the stored documents), so I think I can add a custom prompt to correct the {query_str} based on {text}, including {keywords}
  1. Yea, that's the part. Just one idea though πŸ‘
  1. Seems like it! πŸ‘€πŸ€” would be interesting to try out. I should probably read the code for the keyword index a little closer so I know how it works haha
Ok, I'll try. Hope it works!