Could this be done with the query engine? There's a tool that will generate more than one question before querying your specified index. Optionally, you could instruct your LLM in the prompt to use a specific tool for "time related requests" (I think this would use LangChain to let the LLM talk to whatever tool you specify).
So if you have a certain list of questions, I'd use LlamaIndex to read the index, find the closest match to the question being asked, and map it to an answer
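A rough sketch of that closest-match idea, in plain Python so it's self-contained — the question/answer pairs and the bag-of-words cosine similarity here are just stand-ins; in practice LlamaIndex would index the questions and compare embeddings instead:

```python
from collections import Counter
from math import sqrt

# Hypothetical FAQ-style question -> answer mapping (illustrative only).
QA_PAIRS = {
    "what are your opening hours": "We are open 9am-5pm, Monday to Friday.",
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "where is your office located": "Our office is at 123 Example Street.",
}

def _vector(text: str) -> Counter:
    # Crude bag-of-words vector; a real system would use embeddings.
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def closest_answer(query: str) -> str:
    # Pick the stored question most similar to the incoming query,
    # then return the answer mapped to it.
    best = max(QA_PAIRS, key=lambda q: _cosine(_vector(query), _vector(q)))
    return QA_PAIRS[best]
```

So `closest_answer("how can I reset the password")` would land on the password entry even though the wording differs.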
Yeah, to me this sounds like the sub-question query engine. It reads an input query, breaks it down into sub-questions, decides which index each one should go to, and then aggregates the results 🤔
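To make that flow concrete, here's a minimal sketch of the decompose → route → aggregate pattern in plain Python. The `decompose` stub, the `uber_10k`/`lyft_10k` tool names, and the hardcoded sub-questions are all made up for illustration — in LlamaIndex's actual `SubQuestionQueryEngine`, an LLM does the decomposition and routing based on the tool descriptions you register:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SubQuestion:
    text: str       # the generated sub-question
    tool_name: str  # which index/engine should answer it

def decompose(query: str) -> list[SubQuestion]:
    # Stub for the LLM decomposition step: a real implementation prompts
    # the LLM with the available tool descriptions and the input query.
    if "compare" in query.lower():
        return [
            SubQuestion("What is revenue for Uber?", "uber_10k"),
            SubQuestion("What is revenue for Lyft?", "lyft_10k"),
        ]
    return [SubQuestion(query, "uber_10k")]

def run(query: str, engines: dict[str, Callable[[str], str]]) -> str:
    # Route each sub-question to its engine, then aggregate the answers.
    # (The real engine synthesizes a final answer with the LLM instead of
    # just joining the strings.)
    answers = [f"{sq.tool_name}: {engines[sq.tool_name](sq.text)}"
               for sq in decompose(query)]
    return "\n".join(answers)

engines = {
    "uber_10k": lambda q: "Uber revenue was $X (toy answer)",
    "lyft_10k": lambda q: "Lyft revenue was $Y (toy answer)",
}
result = run("Compare revenue of Uber and Lyft", engines)
```

Here `result` contains one line per sub-question, each answered by the index it was routed to.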