It's only used for counting tokens, not for actually sending tokenized data.
In any case, you can set it with something like this:
from llama_index.langchain_helpers.text_splitter import TokenTextSplitter
from llama_index.node_parser.simple import SimpleNodeParser
from llama_index import ServiceContext, GPTSimpleVectorIndex
import tiktoken

# any callable that maps text -> tokens works; e.g. a tiktoken encoding's encode method
tokenizer = tiktoken.get_encoding("cl100k_base").encode
text_splitter = TokenTextSplitter(tokenizer=tokenizer)
node_parser = SimpleNodeParser(text_splitter=text_splitter)
service_context = ServiceContext.from_defaults(node_parser=node_parser)
index = GPTSimpleVectorIndex.from_documents(docs, service_context=service_context)
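To see why only counting matters: the splitter just calls your tokenizer and measures the length of the result to decide where chunks end, so any text-to-tokens callable will do. A minimal sketch with a hypothetical whitespace tokenizer standing in for tiktoken's encoder:

```python
# A stand-in tokenizer: the splitter only ever uses len(tokenizer(text)),
# so the tokens themselves are never sent anywhere.
def tokenizer(text: str) -> list[str]:
    return text.split()  # tiktoken's encode would return token ids instead

chunk = "It's only used for counting tokens"
token_count = len(tokenizer(chunk))
print(token_count)  # → 6
```

With tiktoken, the count would differ (subword tokens, not words), but the splitter's logic is the same either way.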