
Updated 2 months ago

How can I manage/introspect the chunk size in the TreeSummarize class?

How can I manage/introspect the chunk size in the TreeSummarize class? I am using gpt-4-32k to process a longish document that is still only 9k tokens long. My understanding is that this should easily fit in a single request. However, when I run the tree summarizer in verbose mode, it shows:

3 text chunks after repacking
1 text chunks after repacking

which seems to indicate that it is initially dividing the input into three chunks, even though it should not need to. I also see the exact same thing if I run with gpt-3.5, even though that model reports its context window is just 4096. So it seems some other default config parameter is preventing the additional available context from being used.
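For intuition about where those chunk counts come from, here is a rough sketch of the "repacking" idea. This is NOT LlamaIndex's actual implementation; the token-budget reservations (`num_output`, `prompt_overhead`) are illustrative assumptions. The point is that chunks are greedily merged until each LLM call's budget is full, so a smaller effective context window yields more packed chunks:

```python
def repack(chunk_sizes, context_window, num_output=256, prompt_overhead=100):
    """Greedily pack chunk token counts into as few LLM calls as possible.

    `num_output` reserves room for the model's answer and `prompt_overhead`
    for the summary prompt template; both values here are made up.
    """
    budget = context_window - num_output - prompt_overhead
    packed, current = [], 0
    for size in chunk_sizes:
        # Start a new packed chunk when the next piece would overflow the budget.
        if current and current + size > budget:
            packed.append(current)
            current = 0
        current += size
    if current:
        packed.append(current)
    return packed

# A ~9k-token document split into 1k-token pieces:
doc = [1000] * 9
print(len(repack(doc, context_window=4096)))   # -> 3 (matches the 4k window)
print(len(repack(doc, context_window=32768)))  # -> 1 (whole doc fits in one call)
```

Under this sketch, a 4096-token window really does force a 9k document into three chunks, so seeing three chunks with gpt-3.5 is expected; seeing it with gpt-4-32k would suggest the library is using a smaller window than the model supports.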
4 comments
how did you setup gpt-4-32k?
hopefully using the LLM class from llamaindex?
@Logan M yes. and it 'just worked' for some time. however it stopped responding after several requests so i don't know if it was maybe just a fluke or i went over the daily rate limit?
I thiiiink this is working fine? At least for me?

Tested gpt-3.5, gpt-3.5 16k, and gpt-4-32k

Python
>>> from llama_index import ServiceContext, SummaryIndex, SimpleDirectoryReader
>>> from llama_index.llms import OpenAI
>>> ctx = ServiceContext.from_defaults(llm=OpenAI(model="gpt-3.5-turbo-16k"))
>>> documents = SimpleDirectoryReader("./docs/examples/data/paul_graham").load_data()
>>> index = SummaryIndex.from_documents(documents, service_context=ctx)
>>> res = index.as_query_engine(response_mode="tree_summarize").query("What did the author do growing up?")
2 text chunks after repacking
1 text chunks after repacking
>>> ctx = ServiceContext.from_defaults(llm=OpenAI(model="gpt-3.5-turbo"))
>>> index = SummaryIndex.from_documents(documents, service_context=ctx)
>>> res = index.as_query_engine(response_mode="tree_summarize").query("What did the author do growing up?")
6 text chunks after repacking
1 text chunks after repacking
>>> ctx = ServiceContext.from_defaults(llm=OpenAI(model="gpt-4-32k"))
>>> index = SummaryIndex.from_documents(documents, service_context=ctx)
>>> res = index.as_query_engine(response_mode="tree_summarize").query("What did the author do growing up?")
1 text chunks after repacking
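If the library is under-estimating the window, a possible fix is to override it explicitly. A minimal sketch, assuming the same legacy `ServiceContext` API used in the session above (the `context_window` and `num_output` values are just examples):

```python
from llama_index import ServiceContext
from llama_index.llms import OpenAI

# Explicitly tell llama_index how large the model's context window is and how
# many tokens to reserve for the answer, rather than relying on defaults.
ctx = ServiceContext.from_defaults(
    llm=OpenAI(model="gpt-4-32k"),
    context_window=32768,
    num_output=512,
)

# The effective settings can be introspected on the prompt helper:
print(ctx.prompt_helper.context_window, ctx.prompt_helper.num_output)
```

This is also a way to answer the original "introspect" question: the prompt helper on the service context holds the values the repacking step actually uses.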