Hello everyone, not sure if this is the right channel, but after the relevant text is retrieved from a document, what question-answering model are you using at the moment?
So after the relevant texts are retrieved, they are sent to an LLM (it can be any LLM: GPT-3, 3.5, 4, or any open-source model). There are different prompt templates you can use to present the retrieved text to the model in a more meaningful way.
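For example, here's a minimal sketch in Python of the "stuff the retrieved chunks into a prompt" idea. The template wording, variable names, and `build_prompt` helper are just placeholders I made up, not any specific library's API:

```python
# Minimal sketch of a QA prompt template that "stuffs" retrieved passages
# into a single prompt string. Names and wording are illustrative only.

QA_PROMPT = """Use the following context to answer the question.
If the answer is not in the context, say you don't know.

Context:
{context}

Question: {question}

Answer:"""


def build_prompt(retrieved_chunks: list[str], question: str) -> str:
    # Join the retrieved passages into one context block.
    context = "\n\n".join(retrieved_chunks)
    return QA_PROMPT.format(context=context, question=question)


if __name__ == "__main__":
    # The resulting string is what you would send to whichever LLM you pick.
    chunks = [
        "Paris is the capital of France.",
        "France is in Western Europe.",
    ]
    print(build_prompt(chunks, "What is the capital of France?"))
```

The same template works regardless of which model you send the prompt to, so you can swap models without changing the retrieval side.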
Yes, sorry, my question wasn't clear. I was just curious which LLM you are using at the moment; I would like to use the smallest model I can, of course.