When ingesting with `IngestionPipeline(transformations=ingestion_transformations, docstore=document_store, vector_store=vector_store)`, where should I pass the KV index store, such as `SimpleKVStore`?
An index is not created by running the ingestion pipeline. If you pass a vector_store to the pipeline, vector embeddings are generated for the processed nodes and written to that vector store when the pipeline runs.
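For context, here is a rough sketch of that ingestion step. It assumes OpenAI embeddings and that ingestion_transformations, document_store, vector_store, and documents already exist in your code; substitute your own objects.

Plain Text
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.openai import OpenAIEmbedding

# Illustrative transformations; use your own ingestion_transformations list
pipeline = IngestionPipeline(
    transformations=[SentenceSplitter(), OpenAIEmbedding()],
    docstore=document_store,    # enables de-duplication / upserts
    vector_store=vector_store,  # embeddings are written here on run()
)

# Returns the processed nodes; no index object is created at this point
nodes = pipeline.run(documents=documents)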

If you want to create an index from it later, while passing in your own index store, you can do something like this:

Plain Text
from llama_index.core import VectorStoreIndex
from llama_index.core.storage.storage_context import StorageContext

# Wire the stores used by the pipeline, plus your index store, into a StorageContext
storage_context = StorageContext.from_defaults(
    vector_store=vector_store,
    docstore=docstore,
    index_store=your_custom_index_store,
)

# nodes=[] because the embeddings already live in the vector store
index = VectorStoreIndex(nodes=[], storage_context=storage_context)
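To address the SimpleKVStore part of the question directly: the KV store is not passed to the pipeline or to the index itself; it backs the docstore and index store that you hand to the StorageContext. A minimal sketch, assuming the in-memory SimpleKVStore and the simple_kvstore keyword accepted by the simple store classes:

Plain Text
from llama_index.core.storage.docstore import SimpleDocumentStore
from llama_index.core.storage.index_store import SimpleIndexStore
from llama_index.core.storage.kvstore import SimpleKVStore

kvstore = SimpleKVStore()

# The same KV store can back both the docstore and the index store
docstore = SimpleDocumentStore(simple_kvstore=kvstore)
your_custom_index_store = SimpleIndexStore(simple_kvstore=kvstore)

The same wrapping pattern applies if you later swap in a persistent KV backend.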