Load in 8bit

Hey all - is there a way to load Hugging Face models (local) in 8-bit? I don't see the param on HuggingFaceLLMPredictor (it's a param on transformers' AutoModelForCausalLM).
You can pass this as part of model_kwargs, or you can load the model yourself and pass it in directly if that's easier (sketch below the docs link):

https://gpt-index.readthedocs.io/en/latest/reference/llm_predictor.html
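
A minimal sketch of both approaches, untested and version-dependent: it assumes a llama_index (gpt-index) release where HuggingFaceLLMPredictor accepts model_kwargs as well as pre-built model/tokenizer objects, and that bitsandbytes is installed so transformers' load_in_8bit works. The local path is a placeholder.

```python
# Option 1: forward the 8-bit flag through model_kwargs, which (assuming this
# llama_index version) is passed on to AutoModelForCausalLM.from_pretrained.
from llama_index import HuggingFaceLLMPredictor

predictor = HuggingFaceLLMPredictor(
    model_name="/path/to/local/model",       # placeholder local path
    tokenizer_name="/path/to/local/model",
    model_kwargs={"load_in_8bit": True, "device_map": "auto"},
)

# Option 2: load the model yourself with transformers, then hand the objects
# to the predictor (assuming it accepts model= and tokenizer= directly).
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "/path/to/local/model",
    load_in_8bit=True,   # requires bitsandbytes
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("/path/to/local/model")

predictor = HuggingFaceLLMPredictor(model=model, tokenizer=tokenizer)
```

Option 2 is handy when you want full control over quantization settings before llama_index ever sees the model.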