What llama-index version do you have?
Setting `chunk_size`/`chunk_overlap` in `ServiceContext.from_defaults()` definitely seems to work:
```python
>>> from llama_index import ServiceContext
>>> ctx = ServiceContext.from_defaults(chunk_size=20, chunk_overlap=2)
>>> ctx.node_parser
SentenceSplitter(include_metadata=True, include_prev_next_rel=True, callback_manager=<llama_index.callbacks.base.CallbackManager object at 0x7efd72b234d0>, chunk_size=20, chunk_overlap=2, separator=' ', paragraph_separator='\n\n\n', secondary_chunking_regex='[^,.;。?!]+[,.;。?!]?')
```
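To sanity-check that the parser actually honors those values, you can split a toy document directly (a minimal sketch; the sample text here is made up):

```python
>>> from llama_index import Document
>>> doc = Document(text="The quick brown fox jumps over the lazy dog. " * 10)
>>> nodes = ctx.node_parser.get_nodes_from_documents([doc])
>>> [len(n.text.split()) for n in nodes]  # each chunk stays at/under ~20 tokens
```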
And passing in a `text_splitter` also works:
```python
>>> from llama_index.text_splitter import SentenceSplitter
>>> ctx = ServiceContext.from_defaults(llm=None, embed_model=None, text_splitter=SentenceSplitter(chunk_size=20, chunk_overlap=2))
>>> ctx.node_parser
SentenceSplitter(include_metadata=True, include_prev_next_rel=True, callback_manager=<llama_index.callbacks.base.CallbackManager object at 0x7efd40669710>, chunk_size=20, chunk_overlap=2, separator=' ', paragraph_separator='\n\n\n', secondary_chunking_regex='[^,.;。?!]+[,.;。?!]?')
```
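From there the context is threaded through ingestion as usual. A quick sketch (note that `llm=None`/`embed_model=None` above resolve to LlamaIndex's mock components, so swap in real ones for a production index):

```python
>>> from llama_index import Document, VectorStoreIndex
>>> docs = [Document(text="The quick brown fox jumps over the lazy dog. " * 10)]
>>> index = VectorStoreIndex.from_documents(docs, service_context=ctx)
>>> len(index.docstore.docs)  # one docstore entry per ~20-token chunk
```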