Very minor comment about the llamaindex

Very minor comment about the llamaindex blog made for v0.9. I think there's a small typo in the sample code for saving and loading ingestion pipelines from a local cache. At the bottom it says:
new_pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(chunk_size=25, chunk_overlap=0),
        TitleExtractor(),
    ],
    cache=new_cache,
)
# will run instantly due to the cache
nodes = pipeline.run(documents=[Document.example()])

I'm guessing that last line should be
nodes = new_pipeline.run(... instead of pipeline.run(...
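For anyone skimming: the point of the blog example is that a second pipeline built with the same cache skips re-running the transformations, which is why calling run on the old pipeline object hides the bug. A minimal sketch of that idea, using a plain dict and made-up class names in place of the real IngestionPipeline/IngestionCache API:

```python
# Illustrative sketch only -- not the llama-index API. Shows why a
# second pipeline sharing the same cache returns "instantly".
class Pipeline:
    def __init__(self, transform, cache):
        self.transform = transform
        self.cache = cache   # shared dict: input -> transformed output
        self.calls = 0       # counts actual transform invocations

    def run(self, docs):
        out = []
        for d in docs:
            if d not in self.cache:   # only transform on a cache miss
                self.calls += 1
                self.cache[d] = self.transform(d)
            out.append(self.cache[d])
        return out

cache = {}
pipeline = Pipeline(str.upper, cache)
pipeline.run(["doc"])                       # does the work, fills the cache

new_pipeline = Pipeline(str.upper, cache)   # new pipeline, same cache
nodes = new_pipeline.run(["doc"])           # pure cache hit
print(new_pipeline.calls)                   # 0 -> the "instant" second run
```

So calling pipeline.run(...) instead of new_pipeline.run(...) still "works", but it no longer demonstrates that the new pipeline is served from the cache.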
3 comments
ha good catch! You are right
You're welcome hahaha, I was worried I wasn't understanding what was going on at first πŸ˜…