But what local LLM model can I use?

But what local LLM model can I use? Could I deploy ChatGLM locally and still get good performance?
3 comments
To run an LLM locally, you'll need a machine with a decent GPU.
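
If you want to sanity-check whether ChatGLM actually runs on your hardware, here is a minimal sketch using Hugging Face transformers. It assumes the ChatGLM3-6B weights from the THUDM repo and a CUDA GPU with roughly 13 GB of free VRAM for fp16 weights; the exact model choice is just an example.

```python
# Minimal sketch: loading ChatGLM3-6B locally with Hugging Face transformers.
# The repo ships custom modeling code, hence trust_remote_code=True.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "THUDM/chatglm3-6b"  # example model; any local-friendly LLM works

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True).half().cuda()
model = model.eval()

# ChatGLM's remote code exposes a chat() helper; other models expose
# different APIs, so check the model card for the one you pick.
response, history = model.chat(tokenizer, "Hello, who are you?", history=[])
print(response)
```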

There is an LLM leaderboard on Hugging Face that ranks open-source LLMs. Find it here: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard

LlamaIndex also maintains a compatibility tracker showing how well various LLMs work with its abstractions. Find it here: https://docs.llamaindex.ai/en/stable/module_guides/models/llms.html#llm-compatibility-tracking
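
As a hedged sketch of wiring a local Hugging Face model into LlamaIndex: the import path below follows recent llama-index releases (the `llama-index-llms-huggingface` package; older versions used `from llama_index.llms import HuggingFaceLLM`), and the model name and context window are assumptions you should check against the model card.

```python
# Sketch: serving a local Hugging Face model through LlamaIndex's LLM interface.
from llama_index.llms.huggingface import HuggingFaceLLM

llm = HuggingFaceLLM(
    model_name="THUDM/chatglm3-6b",       # example model, not an endorsement
    tokenizer_name="THUDM/chatglm3-6b",
    context_window=8192,                  # assumed context size; verify on the model card
    max_new_tokens=256,
    device_map="auto",                    # spread weights across available GPUs
    model_kwargs={"trust_remote_code": True},
    tokenizer_kwargs={"trust_remote_code": True},
)

# complete() returns a CompletionResponse; .text holds the generated string.
print(llm.complete("What is LlamaIndex?").text)
```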