I'm attempting to use:
llm = HuggingFaceLLM(
...
tokenizer_name="meta-llama/Llama-2-7b-chat-hf",
model_name="meta-llama/Llama-2-7b-chat-hf",
...)
But I keep getting this error: "ValueError: Need either a state_dict or a save_folder containing offloaded weights." I've tried specifying an empty save_folder directly in the HuggingFaceLLM() call, but that's rejected as an unexpected keyword argument, and I've also tried adding it to generate_kwargs={} and tokenizer_kwargs={}, without success. I suspect it's not just looking for a blank folder, either. Any ideas?
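
For reference, this is roughly what those attempts looked like (a sketch rather than my exact code: the import path is the one used by recent llama_index versions and may differ in yours, and "offload" is just a placeholder name for the empty directory I pointed it at):

from llama_index.llms.huggingface import HuggingFaceLLM

llm = HuggingFaceLLM(
    tokenizer_name="meta-llama/Llama-2-7b-chat-hf",
    model_name="meta-llama/Llama-2-7b-chat-hf",
    # save_folder="offload",                      # attempt 1: unexpected keyword argument
    generate_kwargs={"save_folder": "offload"},   # attempt 2: same ValueError as before
    tokenizer_kwargs={"save_folder": "offload"},  # attempt 3: same ValueError as before
)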