What is the difference between defining chunk_size_limit within ServiceContext.from_defaults and defining it within PromptHelper.from_llm_predictor? I only defined a chunk size within SimpleNodeParser, as part of its text_splitter:
```python
splitter = TokenTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)
node_parser = SimpleNodeParser(
    text_splitter=splitter,
    include_extra_info=False,
    include_prev_next_rel=True,
)
```
chunk_size_limit is not a parameter for SimpleNodeParser. It is a parameter for both PromptHelper and ServiceContext.

When you set chunk_size_limit within PromptHelper.from_llm_predictor, you are setting the maximum chunk size to use for that PromptHelper instance. The PromptHelper is the utility that fills in the prompt, splits the text, and fills in context information according to the necessary token limitations.
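For illustration, here is a minimal sketch of that first option, assuming the legacy (pre-v0.10) LlamaIndex API where LLMPredictor and PromptHelper.from_llm_predictor exist; the LLM choice and the 512 limit are placeholder values:

```python
from langchain.llms import OpenAI
from llama_index import LLMPredictor, PromptHelper

# Placeholder LLM; swap in whatever predictor you actually use.
llm_predictor = LLMPredictor(llm=OpenAI(temperature=0))

# chunk_size_limit caps the chunk size the PromptHelper uses when it splits
# text and packs context into the prompt under the model's token limits.
prompt_helper = PromptHelper.from_llm_predictor(
    llm_predictor,
    chunk_size_limit=512,
)
```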
When you set chunk_size_limit within ServiceContext.from_defaults, you are setting the maximum chunk size limit for the ServiceContext instance. The ServiceContext is a utility container for LlamaIndex index and query classes.
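And a sketch of the second option, again assuming the legacy ServiceContext API; llm_predictor carries over from the snippet above, and the index class named in the comment varies by LlamaIndex version:

```python
from llama_index import ServiceContext

# chunk_size_limit here sets the maximum chunk size for the ServiceContext,
# which the index and query classes that receive this context then respect.
service_context = ServiceContext.from_defaults(
    llm_predictor=llm_predictor,
    chunk_size_limit=512,
)

# e.g. index = GPTVectorStoreIndex.from_documents(docs, service_context=service_context)
```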
In your code, you are defining a TokenTextSplitter with chunk_size and chunk_overlap parameters and using it in a SimpleNodeParser. That controls how your documents are split into nodes, which is different from setting chunk_size_limit in PromptHelper or ServiceContext. chunk_size_limit should be defined within PromptHelper or ServiceContext as needed, not within SimpleNodeParser.
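Putting the two together, here is a sketch of how your splitter-based node parser and a chunk_size_limit can coexist; the import paths below are from the legacy API and may differ between versions:

```python
from llama_index import ServiceContext
from llama_index.node_parser import SimpleNodeParser
from llama_index.langchain_helpers.text_splitter import TokenTextSplitter

# The splitter's chunk_size controls how documents are chunked into nodes.
splitter = TokenTextSplitter(chunk_size=1024, chunk_overlap=20)
node_parser = SimpleNodeParser(text_splitter=splitter)

# chunk_size_limit is the separate cap applied via the ServiceContext
# (or a PromptHelper) for the index and query classes that use it.
service_context = ServiceContext.from_defaults(
    node_parser=node_parser,
    chunk_size_limit=1024,
)
```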
Follow-up: can you explain how chunk_size_limit for the PromptHelper affects the query to the LLM, and how it affects the index/query classes for the ServiceContext, i.e. the difference between PromptHelper and ServiceContext here?