mmmm not really
A lot of it would be manually parsing the .ipynb JSON, which I think is pretty unique
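Something like this, roughly, as a sketch (assumes the standard nbformat layout where a notebook is JSON with a "cells" list, and "notebook.ipynb" is just a hypothetical path):

import json

# rough sketch: pull text/code out of a notebook's cells by hand
with open("notebook.ipynb") as f:  # hypothetical filename
    nb = json.load(f)

text_chunks = []
for cell in nb["cells"]:
    # "source" is usually a list of line strings in nbformat v4
    source = "".join(cell["source"])
    if cell["cell_type"] in ("markdown", "code"):
        text_chunks.append(source)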
But once you have the text/code in the chunks you want, creating the documents is easy:
from llama_index import Document, VectorStoreIndex

text_chunks = [...]  # your parsed text/code chunks
documents = [Document(text=t) for t in text_chunks]
index = VectorStoreIndex.from_documents(documents)
Adjusting the chunk size is easy though!
from llama_index import ServiceContext, VectorStoreIndex

# bump the chunk size from the default via the service context
service_context = ServiceContext.from_defaults(..., chunk_size=2048)
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
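chunk_size there controls how the documents get split into nodes when the index is built, so a bigger value means fewer, larger chunks per document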