Error Creating HuggingFaceLLM

I've run into an error and haven't been able to figure it out. I'm following this notebook to set up Llama 2 via Hugging Face: https://colab.research.google.com/drive/14N-hmJ87wZsFqHktrw40OU6sVcsiSzlQ?usp=sharing#scrollTo=lMNaHDzPM68f. When I get to this line of code:
Plain Text
llm = HuggingFaceLLM(
    model_name="meta-llama/Llama-2-7b-chat-hf",
    tokenizer_name="meta-llama/Llama-2-7b-chat-hf",
    query_wrapper_prompt=PromptTemplate("<s> [INST] {query_str} [/INST] "),
    context_window=3900,
    model_kwargs={"token": hf_token, "quantization_config": quantization_config},
    tokenizer_kwargs={"token": hf_token},
    device_map="auto",
)

I get the following error: "validation error for HuggingFaceLLM system_prompt none is not an allowed value (type=type_error.none.not_allowed)", and I haven't been able to figure it out. (I'm a complete newbie here, and this is my first time going through the LlamaIndex documentation.) Has anybody run into this before? I'm running on Windows 11 under WSL (Ubuntu).
6 comments
The actual error/callstack is:
Plain Text
INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).
We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).
Loading checkpoint shards: 100%|████████████████| 2/2 [00:01<00:00,  1.17it/s]
Traceback (most recent call last):
  File "/root/llm-contextualize/starter.py", line 34, in <module>
    llm = HuggingFaceLLM(
  File "/root/llm-contextualize/venv/lib/python3.10/site-packages/llama_index/llms/huggingface.py", line 228, in __init__
    super().__init__(
  File "/root/llm-contextualize/venv/lib/python3.10/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for HuggingFaceLLM
system_prompt
  none is not an allowed value (type=type_error.none.not_allowed)
Is this a bug, or is it programmer error? I'm pretty sure I followed along with that notebook.
Well, kapa.ai gave me some context that resolved this for me. I added the following code:
Plain Text
system_prompt = """<|SYSTEM|># StableLM Tuned (Alpha version)
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
- StableLM will refuse to participate in anything that could harm a human.
"""

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # load the model weights in 4-bit precision
    bnb_4bit_compute_dtype=torch.float16,  # dtype used for computation on the dequantized weights
    bnb_4bit_quant_type="nf4",             # NormalFloat4, a 4-bit type suited to normally distributed weights
    bnb_4bit_use_double_quant=True,        # also quantize the quantization constants to save more memory
)

llm = HuggingFaceLLM(
    model_name="meta-llama/Llama-2-7b-chat-hf",
    tokenizer_name="meta-llama/Llama-2-7b-chat-hf",
    query_wrapper_prompt=PromptTemplate("<s> [INST] {query_str} [/INST] "),
    context_window=3900,
    system_prompt=system_prompt,
    model_kwargs={"token": hf_token, "quantization_config": quantization_config},
    tokenizer_kwargs={"token": hf_token},
    device_map="auto",
)
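
For intuition on what the 4-bit quantization buys, here's a rough back-of-envelope estimate (not measured values; it ignores quantization overhead like scales, buffers, and activations):
Plain Text
```python
# Approximate weight memory for a 7B-parameter model.
params = 7_000_000_000

fp16_gb = params * 2 / 1e9    # float16: 2 bytes per weight
nf4_gb = params * 0.5 / 1e9   # 4-bit NF4: 0.5 bytes per weight

print(f"float16 weights: ~{fp16_gb:.1f} GB")  # ~14.0 GB
print(f"4-bit weights:   ~{nf4_gb:.1f} GB")   # ~3.5 GB
```
That's the difference between not fitting and fitting on a typical consumer GPU.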

I guess I need to lean where to get the syntax for these prompts for example "<s> [INST] {query_str} [/INST] " is pretty cryptic to me.
I guess I won't delete this in case it'll help someone else
query_wrapper_prompt=PromptTemplate("<s> [INST] {query_str} [/INST] "), -- it is cryptic, and you can blame the llama2 creators πŸ˜†

I see the bug. kapa somehow gave you a half-right answer. You can ignore the system prompt it gave; just set system_prompt="" I think -- I'll patch the actual bug in the library
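
In other words (a sketch of the same call with the StableLM system prompt dropped; untested, but as I understand it an empty string satisfies the pydantic field where None does not):
Plain Text
```python
llm = HuggingFaceLLM(
    model_name="meta-llama/Llama-2-7b-chat-hf",
    tokenizer_name="meta-llama/Llama-2-7b-chat-hf",
    query_wrapper_prompt=PromptTemplate("<s> [INST] {query_str} [/INST] "),
    context_window=3900,
    system_prompt="",  # empty string, not None, to pass validation
    model_kwargs={"token": hf_token, "quantization_config": quantization_config},
    tokenizer_kwargs={"token": hf_token},
    device_map="auto",
)
```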