Load in 8bit
Fred Bliss
2 years ago
Hey all, is there a way to load Hugging Face models (local) in 8-bit? I don't see the param in HuggingFaceLLMPredictor (it's a param in the transformers AutoModelForCausalLM).
Logan M
2 years ago
You can pass this as part of the model_kwargs, or you can load the model yourself and pass that in too if that's easier.
https://gpt-index.readthedocs.io/en/latest/reference/llm_predictor.html
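A rough sketch of both options. This assumes the gpt-index-era HuggingFaceLLMPredictor forwards model_kwargs to AutoModelForCausalLM.from_pretrained and also accepts a pre-loaded model, and that bitsandbytes plus a CUDA GPU are available for 8-bit loading; the model name is illustrative, so the predictor calls are left commented out:

```python
# Option 1: pass load_in_8bit through model_kwargs, which (per the linked
# docs) are forwarded to AutoModelForCausalLM.from_pretrained.
model_kwargs = {"load_in_8bit": True, "device_map": "auto"}
# predictor = HuggingFaceLLMPredictor(
#     model_name="facebook/opt-1.3b",      # illustrative model
#     tokenizer_name="facebook/opt-1.3b",
#     model_kwargs=model_kwargs,
# )

# Option 2: load the 8-bit model yourself with transformers and pass it in.
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "facebook/opt-1.3b", load_in_8bit=True, device_map="auto"
# )
# predictor = HuggingFaceLLMPredictor(model=model)

print(model_kwargs)
```

Either way, the 8-bit quantization itself is handled by transformers/bitsandbytes; the predictor just needs to receive the kwargs or the already-quantized model.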