I want to create a secondary index (a default VectorStoreIndex) that will hold summarised chunks from the primary index. I need the node_ids in both indexes to be the same, so that retrieval is done against the primary index but the context passed to the LLM for generation comes from the secondary index. How can I do this? I was trying index.docstore.docs.values() to get the node_ids from the primary index, but that is not working. @Logan M
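A minimal sketch of the shared-node_id pattern being described, using plain dicts as stand-ins for the two docstores so it runs without llama_index installed. (In llama_index itself, index.docstore.docs is a dict keyed by node_id, so .keys() gives the ids and .values() gives the node objects, each carrying a .node_id; the secondary index would be built from nodes whose id_ is copied from the primary node. The summarize function below is a placeholder, not a library call.)

```python
def summarize(text: str) -> str:
    # Placeholder for an LLM summarizer; here we just keep the first sentence.
    return text.split(".")[0] + "."

# Stand-in for the primary index's docstore: node_id -> full chunk text.
primary = {
    "node-1": "Full chunk one. Lots of detail here.",
    "node-2": "Full chunk two. Even more detail here.",
}

# Secondary store: summarised text keyed by the SAME node_ids,
# so a hit in the primary index maps directly to its summary.
secondary = {node_id: summarize(text) for node_id, text in primary.items()}

def generation_context(retrieved_ids):
    # Retrieval happens against the primary index; the context handed
    # to the LLM is swapped in from the secondary index by shared id.
    return [secondary[node_id] for node_id in retrieved_ids]

print(generation_context(["node-1"]))  # ['Full chunk one.']
```

The same id-sharing works with real llama_index nodes by constructing each summary node with the primary node's id (e.g. setting id_ when creating the summary node) before building the secondary VectorStoreIndex.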
Hello, I have been using LlamaIndex to build a RAG pipeline. My ingested data is PDFs, and I am trying to build a chatbot around it. But for a query like 'Hi' or 'Thank you', it still searches the index and returns some context-based output. I want to prevent this: for general queries I want to skip retrieval and answer directly with the LLM, which would make the bot more user-friendly and save time. Any suggestions?
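One common approach is a small routing step in front of the query engine: classify the message as chit-chat or a real question, and only hit the index for the latter. A minimal sketch, where the greeting list and the llm_answer / query_index helpers are hypothetical placeholders (in practice llm_answer would be a raw LLM completion call and query_index the usual query_engine.query):

```python
# Phrases that should bypass retrieval entirely. A real bot might use
# an LLM-based classifier or a selector/router instead of a fixed list.
CHITCHAT = {"hi", "hello", "hey", "thanks", "thank you", "bye"}

def is_chitchat(query: str) -> bool:
    # Normalize case and trailing punctuation before matching.
    return query.lower().strip(" !.?") in CHITCHAT

def llm_answer(query: str) -> str:
    # Stand-in for a direct LLM call with no retrieved context.
    return f"(LLM) {query}"

def query_index(query: str) -> str:
    # Stand-in for the normal RAG path, e.g. query_engine.query(query).
    return f"(RAG) {query}"

def answer(query: str) -> str:
    if is_chitchat(query):
        return llm_answer(query)   # skip retrieval for general queries
    return query_index(query)      # full RAG path otherwise

print(answer("Hi"))           # (LLM) Hi
print(answer("What is X?"))   # (RAG) What is X?
```

For an LLM-driven version of the same idea, llama_index's router/selector abstractions (e.g. a RouterQueryEngine choosing between a plain LLM tool and the index tool) can make this routing decision instead of a hand-written list.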