But what local LLM model can I use?
HerryC · last year
But what local LLM model can I use? Could I deploy ChatGLM locally and get good performance?
3 comments
WhiteFang_Jr · last year
To run an LLM locally you'll need a machine with a good GPU.
There is an LLM leaderboard on Hugging Face that ranks open-source LLMs. Find it here:
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
LlamaIndex has also created a benchmark for LLMs. Find it here:
https://docs.llamaindex.ai/en/stable/module_guides/models/llms.html#llm-compatibility-tracking
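Before downloading weights, a quick way to see what your machine can handle is to check the local GPU from Python. This is a minimal sketch using PyTorch and is not from the original thread:

```python
import torch

# Quick check of local GPU resources before trying to run a 7B-class model.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, VRAM: {props.total_memory / 1024**3:.1f} GiB")
else:
    print("No CUDA GPU detected; local inference will fall back to CPU and be slow.")
```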
LeMoussel · last year
I use HuggingFaceH4/zephyr-7b-beta with some success.
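For reference, here is a minimal sketch of wiring that model into LlamaIndex as a local LLM. It assumes the llama-index-llms-huggingface integration is installed and a CUDA GPU is available; the parameter values are illustrative, not taken from this thread:

```python
# Minimal sketch: loading HuggingFaceH4/zephyr-7b-beta as a local LLM in LlamaIndex.
# Assumes: pip install llama-index-llms-huggingface, and enough VRAM for a 7B model.
from llama_index.llms.huggingface import HuggingFaceLLM

llm = HuggingFaceLLM(
    model_name="HuggingFaceH4/zephyr-7b-beta",
    tokenizer_name="HuggingFaceH4/zephyr-7b-beta",
    context_window=3900,          # leave headroom below the model's context limit
    max_new_tokens=256,
    device_map="auto",            # let transformers place layers on the GPU
    generate_kwargs={"temperature": 0.7, "do_sample": True},
)

# Simple completion call to confirm the model loads and generates.
print(llm.complete("What is retrieval-augmented generation?"))
```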
LeMoussel · last year
See https://colab.research.google.com/drive/1UoPcoiA5EOBghxWKWduQhChliMHxla7U