Very minor comment about the llamaindex blog post for v0.9. I think there's a small typo in the sample code for saving and loading ingestion pipelines from a local cache. At the bottom it says:
new_pipeline = IngestionPipeline(
transformations=[
SentenceSplitter(chunk_size=25, chunk_overlap=0),
TitleExtractor(),
],
cache=new_cache,
)
# will run instantly due to the cache
nodes = pipeline.run(documents=[Document.example()])
I'm guessing the last line should be
nodes = new_pipeline.run(...
instead of
nodes = pipeline.run(...
since new_pipeline is the one constructed with the loaded cache, while pipeline refers to the original pipeline object.