Updated 11 months ago

Is this example from the documentation complete? I tried it with my Postgres DB and a local Redis, but I don't see the data in my vector DB:

from llama_index import Document
from llama_index.embeddings import OpenAIEmbedding
from llama_index.text_splitter import SentenceSplitter
from llama_index.extractors import TitleExtractor
from llama_index.ingestion import IngestionPipeline, IngestionCache
from llama_index.ingestion.cache import RedisCache


pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(chunk_size=25, chunk_overlap=0),
        TitleExtractor(),
        OpenAIEmbedding(),
    ],
    cache=IngestionCache(
        cache=RedisCache(
            redis_uri="redis://127.0.0.1:6379", collection="test_cache"
        )
    ),
)

# Ingest directly into a vector db
nodes = pipeline.run(documents=[Document.example()])
5 comments
There's no vector db here?
I think the comment in the code is misleading; this is just an example of using a cache.
You can put the returned nodes into your vector store yourself, or attach a vector db to the ingestion pipeline.
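A rough sketch of the second suggestion, attaching a vector store to the pipeline: in this version of llama_index, IngestionPipeline accepts a vector_store argument, and run() then inserts the embedded nodes into it. The Postgres connection values below (database, host, user, password, table_name) are placeholders, not details from the thread, and running this requires an OpenAI API key plus a Postgres instance with the pgvector extension.

```python
from llama_index import Document
from llama_index.embeddings import OpenAIEmbedding
from llama_index.text_splitter import SentenceSplitter
from llama_index.ingestion import IngestionPipeline
from llama_index.vector_stores import PGVectorStore

# Placeholder connection details -- substitute your own Postgres credentials.
vector_store = PGVectorStore.from_params(
    database="vector_db",
    host="localhost",
    port=5432,
    user="postgres",
    password="password",
    table_name="llama_nodes",
    embed_dim=1536,  # dimension of OpenAI's text-embedding-ada-002
)

pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(chunk_size=25, chunk_overlap=0),
        OpenAIEmbedding(),  # embeddings are required before vector store insertion
    ],
    vector_store=vector_store,  # embedded nodes are written here by run()
)

# run() both returns the nodes and persists them in the attached store.
nodes = pipeline.run(documents=[Document.example()])
```

With no vector_store attached (as in the original snippet), run() only returns the nodes, which is why nothing showed up in the database.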