The community is discussing how to handle many-to-one Q&A with LlamaIndex, where several differently phrased questions should map to a single answer. Suggestions include using the query engine to generate multiple questions and map them to one answer, taking a hybrid approach between classic rule-based chatbots and generative AI, and storing the target answer's ID in each question's metadata. Overall, the discussion focuses on finding an efficient way to map multiple questions to a single answer using LlamaIndex.
Could this be done with the query engine? There's a tool that will generate more than one question before checking against your specified index. Optionally, you could instruct the LLM in the prompt to use a specific tool for "time-related requests" (I think this would use LangChain to let the LLM talk to whatever tool you specify).
So you have a fixed list of questions; I'd use LlamaIndex to search the index for the closest match to the question being asked, then map it to an answer.
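A library-agnostic sketch of that matching-and-mapping step, assuming a toy word-overlap similarity in place of real embeddings (in LlamaIndex this would be a vector index over the stored questions, with the answer ID carried in each node's metadata; all questions, answers, and IDs below are made up for illustration):

```python
# Toy many-to-one Q&A matcher: several question variants map to one answer ID.
# Jaccard word overlap stands in for real embedding similarity.

def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

# Each stored question carries metadata pointing at its canonical answer,
# mirroring the "put the answer ID in the node metadata" suggestion.
QUESTION_INDEX = [
    {"question": "what time do you open", "answer_id": "hours"},
    {"question": "when are you open", "answer_id": "hours"},
    {"question": "where are you located", "answer_id": "location"},
]

ANSWERS = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "location": "We are at 123 Main St.",
}

def answer(query: str) -> str:
    # Retrieve the closest stored question, then follow its answer_id.
    best = max(QUESTION_INDEX, key=lambda entry: jaccard(query, entry["question"]))
    return ANSWERS[best["answer_id"]]
```

Note how "when do you open" and "what time do you open" both resolve to the same `hours` answer: that indirection through `answer_id` is what makes the mapping many-to-one.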
Yeah, to me this sounds like the sub-question query engine. It reads an input query, breaks it down into sub-questions, decides which index to send each one to, and then aggregates the results 🤔
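As a toy illustration of that decompose-route-aggregate flow (LlamaIndex's sub-question query engine uses an LLM for the decomposition and routing; the keyword routing, index names, and canned answers here are all invented for the sketch):

```python
# Toy sub-question routing: split a compound query into sub-questions, route
# each one to an "index" by keyword, and aggregate the per-index answers.
# In the real engine an LLM handles the splitting and the index choice.

INDEXES = {
    "billing": lambda q: f"[billing index] answer to: {q}",
    "shipping": lambda q: f"[shipping index] answer to: {q}",
}

ROUTES = {"invoice": "billing", "refund": "billing", "delivery": "shipping"}

def route(sub_question: str) -> str:
    # Pick an index for this sub-question based on keywords.
    for keyword, index_name in ROUTES.items():
        if keyword in sub_question.lower():
            return index_name
    return "billing"  # arbitrary fallback for the sketch

def answer_compound(query: str) -> str:
    # Naive decomposition: split on " and " instead of asking an LLM.
    sub_questions = [s.strip() for s in query.split(" and ")]
    results = [INDEXES[route(s)](s) for s in sub_questions]
    # Aggregate the per-index results into one response.
    return " | ".join(results)
```

Here a single compound query fans out to two different indexes and the answers are stitched back together, which is the shape of behavior described above.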