----------------------
The default chunk size for splitting documents into excerpts is not explicitly mentioned in the provided extracts. However, in the examples provided, the chunk size is often set manually when creating the ServiceContext or LLMPredictor objects. For instance, in some examples, the chunk size is set to 512 or 1024.
Here is an example of how the chunk size is set when creating a ServiceContext:
from llama_index import ServiceContext

# Split documents into 512-token chunks (llm_predictor as defined in the next example)
service_context = ServiceContext.from_defaults(chunk_size=512, llm_predictor=llm_predictor)
And here is an example of setting the chunk size when creating an LLMPredictor:
from langchain.chat_models import ChatOpenAI
from llama_index import LLMPredictor, ServiceContext

# Wrap a LangChain chat model, then set a 1024-token chunk size on the service context
llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0, model_name="gpt-4", streaming=True))
service_context = ServiceContext.from_defaults(chunk_size=1024, llm_predictor=llm_predictor)
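For context, here is a minimal sketch of how such a service_context is typically passed when building an index, so the chunk_size actually takes effect when documents are split. This assumes the same LlamaIndex version as the snippets above; the "data" directory is a hypothetical placeholder:

from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex

# Load documents from a local folder (hypothetical path)
documents = SimpleDirectoryReader("data").load_data()

# chunk_size controls how each document is split into excerpts during indexing
service_context = ServiceContext.from_defaults(chunk_size=512)
index = VectorStoreIndex.from_documents(documents, service_context=service_context)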
Please refer to the LlamaIndex documentation for more details on the default settings and how to customize them.
----------------------
Note: you can ask me a follow-up question by @-mentioning me again 💬
----------------------