An index is not created automatically after running the ingestion pipeline. If you pass a vector_store to the pipeline, vector embeddings are created for the processed nodes when the pipeline runs. If you want to build an index from that vector store later, while passing in your own IndexStore, you can do something like this:
```python
from llama_index.core import VectorStoreIndex
from llama_index.core.storage.storage_context import StorageContext

# Build a storage context that points at your existing stores
storage_context = StorageContext.from_defaults(
    vector_store=vector_store,
    docstore=docstore,
    index_store=your_custom_index_store,
)

# nodes=[] because the embeddings already live in the vector store
index = VectorStoreIndex(nodes=[], storage_context=storage_context)
```