The community member is using the VectorStoreIndex.from_documents() method from the llama_index library and passing a SentenceSplitter transformation, even though they have already set Settings.chunk_size to 512. A comment suggests that setting the chunk size in the global settings only configures Settings.node_parser, the default parser used when no explicit transformations are supplied.
In the code below, why do we need to apply the SentenceSplitter transformation when we have already set Settings.chunk_size? The code is taken from the official documentation:

    # Global settings
    from llama_index.core import Settings

    Settings.chunk_size = 512

    # Local settings
    from llama_index.core import VectorStoreIndex
    from llama_index.core.node_parser import SentenceSplitter

    index = VectorStoreIndex.from_documents(
        documents, transformations=[SentenceSplitter(chunk_size=512)]
    )
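The relationship hinted at in the summary can be sketched without the real library. The classes below are a hypothetical stand-in, not llama_index's actual implementation: they illustrate the idea that setting a global chunk_size merely reconfigures a default splitter, while an explicit transformations list overrides the global default, so the two configurations above end up chunking the same way.

    ```python
    from dataclasses import dataclass


    @dataclass
    class SentenceSplitter:
        """Toy splitter; only the chunk_size knob matters for this sketch."""
        chunk_size: int = 1024


    class _Settings:
        """Toy stand-in for a global Settings object (hypothetical)."""

        def __init__(self):
            self._node_parser = SentenceSplitter()

        @property
        def node_parser(self):
            return self._node_parser

        @property
        def chunk_size(self):
            return self._node_parser.chunk_size

        @chunk_size.setter
        def chunk_size(self, value):
            # Setting chunk_size just reconfigures the default node parser.
            self._node_parser.chunk_size = value


    Settings = _Settings()
    Settings.chunk_size = 512


    def from_documents(documents, transformations=None):
        """Return the parsers that would chunk the documents (toy version)."""
        # Explicit transformations override the global default; otherwise
        # the default node parser (already set to chunk_size=512) is used.
        return transformations or [Settings.node_parser]


    # Both calls end up chunking with chunk_size=512:
    default_parsers = from_documents(["doc"])
    explicit_parsers = from_documents(
        ["doc"], transformations=[SentenceSplitter(chunk_size=512)]
    )
    print(default_parsers[0].chunk_size, explicit_parsers[0].chunk_size)  # 512 512
    ```

Under this reading, the explicit SentenceSplitter(chunk_size=512) in the documentation snippet is redundant with Settings.chunk_size = 512; passing transformations is only needed to use a different splitter (or different parameters) for that one index.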