In the prompt helper, `chunk_size_limit` caps each piece of context at that many tokens.
If the retrieved context has to be split to fit, the resulting chunks overlap by the configured number of tokens.
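To make the overlap behavior concrete, here is a minimal self-contained sketch of that kind of splitting (illustrative only, not the library's actual implementation — the real prompt helper works on tokenizer output, not a plain list):

```python
def split_with_overlap(tokens, chunk_size, overlap):
    """Split `tokens` into chunks of at most `chunk_size`,
    where consecutive chunks share `overlap` tokens."""
    step = chunk_size - overlap  # how far the window advances each time
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + chunk_size])
        if start + chunk_size >= len(tokens):
            break  # last chunk already reaches the end
    return chunks

chunks = split_with_overlap(list(range(10)), chunk_size=4, overlap=2)
print(chunks)
# [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

Note how each chunk repeats the last 2 tokens of the previous one, so no sentence gets cut off without context on either side of the boundary.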
If you also set `chunk_size_limit` directly on the service context, it additionally ensures the nodes created from your documents are at most that size (with a default overlap of 200 tokens, which you can configure in the node parser).
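A configuration sketch, assuming the legacy `ServiceContext` API (import paths and parameter names have shifted between LlamaIndex versions, so check the docs for yours):

```python
# Hypothetical setup: cap nodes at 512 tokens and shrink the
# default 200-token overlap down to 50 in the node parser.
from llama_index import ServiceContext
from llama_index.node_parser import SimpleNodeParser

node_parser = SimpleNodeParser.from_defaults(
    chunk_size=512,    # max tokens per node
    chunk_overlap=50,  # overlap between consecutive nodes (default 200)
)

service_context = ServiceContext.from_defaults(
    chunk_size_limit=512,     # also caps context chunks in the prompt helper
    node_parser=node_parser,
)
```

Then pass `service_context` when building your index so both document parsing and prompt packing respect the same limit.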