A community member is using LLM Sherpa as a parser and asks whether the chunk size can be preserved when converting the parsed output to nodes. The replies suggest creating the nodes on the fly with TextNode and then indexing them with VectorStoreIndex. The community members are impressed with LlamaIndex's capabilities and grateful that it is open-source.
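The suggested approach might look something like the sketch below: parse a PDF with LLM Sherpa, wrap each parser chunk in a TextNode (so chunk boundaries come from LLM Sherpa rather than LlamaIndex's default splitter), and index the nodes directly. The PDF path and parser URL here are placeholders, and running this requires an embedding backend configured for LlamaIndex.

```python
from llmsherpa.readers import LayoutPDFReader
from llama_index.core import VectorStoreIndex
from llama_index.core.schema import TextNode

# Public LLM Sherpa parsing endpoint; a self-hosted URL works the same way.
llmsherpa_api_url = "https://readers.llmsherpa.com/api/document/developer/parseDocument?renderFormat=all"
reader = LayoutPDFReader(llmsherpa_api_url)
doc = reader.read_pdf("example.pdf")  # placeholder path

# One TextNode per LLM Sherpa chunk, preserving the parser's chunk sizes
# instead of re-splitting the text.
nodes = [TextNode(text=chunk.to_context_text()) for chunk in doc.chunks()]

# Index the pre-chunked nodes directly.
index = VectorStoreIndex(nodes)
query_engine = index.as_query_engine()
```

Because the nodes are built before indexing, any per-chunk metadata from the parser (section titles, page numbers) could also be attached via TextNode's metadata field at the same step.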