Hey @Rohan - My scenario is to store an index in Redis/Pinecone and also store the index ID somewhere. Later, when a query comes in on the documents I indexed, I should be able to reconstruct the index from that index ID. Please see the Python code here:
https://discord.com/channels/1059199217496772688/1133167189860565033/1232101699292893194
My understanding of LlamaIndexTS is:
- Doesn't support Redis document/index stores (I tested this OK in my Python implementation)
- Only supports SimpleDocumentStore, so it can only persist to local disk
So for scenarios with a lot of documents, where I can't save locally in a cloud/server environment, I can't keep a DocStore and separately persist a VectorStoreIndex and SummaryIndex to load back at a later time.
If the following piece of code could be done in TS, it would be very helpful. Redis itself is not important, but storing and loading like this is:
from llama_index.core import (
    StorageContext,
    SummaryIndex,
    VectorStoreIndex,
    SimpleKeywordTableIndex,
)
from llama_index.storage.docstore.redis import RedisDocumentStore
from llama_index.storage.index_store.redis import RedisIndexStore

storage_context = StorageContext.from_defaults(
    docstore=RedisDocumentStore.from_host_and_port(host=REDIS, port=19282, namespace="llama_index"),
    index_store=RedisIndexStore.from_host_and_port(host=REDIS, port=19282, namespace="llama_index"),
)
storage_context.docstore.add_documents(nodes)

# build three indices over the same docstore
summary_index = SummaryIndex(nodes, storage_context=storage_context)
vector_index = VectorStoreIndex(nodes, storage_context=storage_context)
keyword_table_index = SimpleKeywordTableIndex(nodes, storage_context=storage_context)

# note down index IDs
list_id = summary_index.index_id
vector_id = vector_index.index_id
keyword_id = keyword_table_index.index_id
Then later I can do:
from llama_index.core import load_index_from_storage

# load indices by ID, with storage_context recreated from the same Redis stores
summary_index = load_index_from_storage(
    storage_context=storage_context, index_id=list_id
)
vector_index = load_index_from_storage(
    storage_context=storage_context, index_id=vector_id
)
keyword_table_index = load_index_from_storage(
    storage_context=storage_context, index_id=keyword_id
)
Really appreciate your time. Thanks