Hello, I have an important doubt that it seems even ChatGPT can't solve. While everyone is talking about RAGs and building them on top of GPT-3.5, Llama 2, etc., nobody mentions that these models are full of information I don't want for a specific RAG application. If I want a RAG for medicine or whatever specific field, I have to build it on top of a model that knows about cars, planes, lettuce, shoes... none of which I care about. Is there a model that "only" knows how to talk and reason, without all this extra information, i.e. a virgin LLM ready to be trained on the specific field?
I believe training is required for any LLM to be able to understand and generate text!
You can always fine-tune the LLM to your requirements. You can also fine-tune on the cases where you don't want it to respond, and train it to respond according to your training data for the other categories. You can check out the fine-tuning docs at LlamaIndex: https://docs.llamaindex.ai/en/stable/optimizing/fine-tuning/fine-tuning.html (a rough sketch of that flow is below).
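For concreteness, here is a minimal sketch of the kind of fine-tuning flow those docs describe, assuming you have an OpenAI API key set and a JSONL file of chat-format training examples already prepared (the filename `finetuning_events.jsonl` and the medical prompt are just placeholders):

```python
# Minimal sketch: fine-tuning gpt-3.5-turbo through LlamaIndex's finetuning module.
# Assumes OPENAI_API_KEY is set in the environment and that
# "finetuning_events.jsonl" (a placeholder name) holds your
# domain-specific training examples in OpenAI's chat JSONL format.
from llama_index.finetuning import OpenAIFinetuneEngine

finetune_engine = OpenAIFinetuneEngine(
    "gpt-3.5-turbo",            # base model to fine-tune
    "finetuning_events.jsonl",  # your domain-specific training data
)

finetune_engine.finetune()  # launches the fine-tuning job on OpenAI's side

# Once the job finishes, grab an LLM handle for the fine-tuned model
ft_llm = finetune_engine.get_finetuned_model(temperature=0.3)
print(ft_llm.complete("What is hypertension?"))
```

Note that this adjusts the model's behavior on your domain; it doesn't remove the base model's general knowledge.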
But this is not what I was referring to. I mean, I'm trying to find out whether there are basic LLMs that don't weigh that much, that I can train myself, and that already know how to talk.