Offload dir

I'm getting this error when attempting to load Writer/camel-5b-hf using GPU:

ValueError: The current device_map had weights offloaded to the disk. Please provide an offload_folder for them. Alternatively, make sure you have safetensors installed if the model you are using offers the weights in this format.

----

Where can I set the parameter for offload_folder?
Hmm, maybe in the model_kwargs argument?

Tbh though, that error means you don't have enough RAM or VRAM to hold the whole model, so weights are being offloaded to disk... the performance will be quite slow
@Logan M yea... I kind of thought so. 😿

I don't see offload_folder as an acceptable param. Is it one of these?

```
(max_input_size: int = 4096, max_new_tokens: int = 256, temperature: float = 0.7, do_sample: bool = False, system_prompt: str = "", query_wrapper_prompt: SimpleInputPrompt = DEFAULT_SIMPLE_INPUT_PROMPT, tokenizer_name: str = "StabilityAI/stablelm-tuned-alpha", model_name: str = "StabilityAI/stablelm-tuned-alpha", model: Any | None = None, tokenizer: Any | None = None, device_map: str = "auto", stopping_ids: List[int] | None = None, tokenizer_kwargs: dict | None = None, model_kwargs: dict | None = None, callback_manager: CallbackManager | None = None) -> None
```
I'll try using a smaller model. Really I'm testing this to scale up to an AWS environment.
Yea, inside model_kwargs

model_kwargs={"offload_folder": "my dir"}

I'm guessing that argument gets passed through to the model anyways 😅
@Logan M this worked, thank you!

model_kwargs={ "torch_dtype": torch.bfloat16, "offload_folder": "./offload/" }
Nice! :dotsCATJAM:
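For anyone landing here later: a minimal, dependency-free sketch of why this works. `load_model` below is a stand-in for the HuggingFace `from_pretrained` call that `HuggingFaceLLM` makes internally, not the real library code; it just shows how entries in `model_kwargs` get forwarded to the loader, which is where accelerate picks up `offload_folder`.

```python
# Stand-in for AutoModelForCausalLM.from_pretrained (illustrative only):
# it records the kwargs it would forward to accelerate's weight loader.
def load_model(model_name, device_map="auto", **kwargs):
    return {"model": model_name, "device_map": device_map, **kwargs}

# The working configuration from this thread (torch_dtype omitted so the
# sketch stays dependency-free; in the real call you'd also pass
# "torch_dtype": torch.bfloat16):
model_kwargs = {"offload_folder": "./offload/"}

config = load_model("Writer/camel-5b-hf", **model_kwargs)
print(config["offload_folder"])  # → ./offload/
```

In the real call, the same dict goes straight into the constructor: `HuggingFaceLLM(model_name="Writer/camel-5b-hf", device_map="auto", model_kwargs=model_kwargs)`. The folder just needs to exist and have enough free disk space for the offloaded shards.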