The post asks which large language model (LLM) is the fastest, i.e., has the lowest response latency, when deployed as a chatbot. The comments suggest trying LLMs served on the Groq or Cerebras platforms, which claim to provide the fastest text generation. However, the comments also note that performance depends on model size and infrastructure, and that models with more parameters may be slower. One community member also asks whether the Groq platform is open source and how to add a human feedback loop to train the model for better responses.
I don’t think it is open source. Also, can you suggest how I can add a human feedback loop to train the model to give better responses? @WhiteFang_Jr @saika @Logan M
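A common first step toward a human feedback loop is simply logging each prompt, response, and a thumbs-up/down rating so the records can later be turned into a preference or fine-tuning dataset. The sketch below is a minimal, hypothetical example (the file name `feedback.jsonl` and the helper names are assumptions, not from any specific platform's API):

```python
import json
from pathlib import Path

# Hypothetical location for the feedback log (an assumption for this sketch).
FEEDBACK_FILE = Path("feedback.jsonl")

def record_feedback(prompt: str, response: str, rating: int) -> dict:
    """Append one (prompt, response, rating) record to a JSONL log.

    rating: +1 for thumbs-up, -1 for thumbs-down. Records like these
    can later be filtered into a dataset for fine-tuning or
    RLHF-style preference training.
    """
    record = {"prompt": prompt, "response": response, "rating": rating}
    with FEEDBACK_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

def load_positive_examples() -> list[dict]:
    """Return only thumbs-up records, usable as fine-tuning examples."""
    if not FEEDBACK_FILE.exists():
        return []
    with FEEDBACK_FILE.open(encoding="utf-8") as f:
        records = [json.loads(line) for line in f]
    return [r for r in records if r["rating"] > 0]
```

From here, the positive examples could be exported in whatever format the chosen fine-tuning service expects; the logging side stays the same regardless of the training backend.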