The post describes an error when pointing the global tokenizer at a GGUF LLM: the Hugging Face repository contains only GGUF files and no config.json, so the tokenizer cannot be loaded directly. Community members suggest loading the tokenizer from a non-GGUF version of the same model, downloading the tokenizer files locally, and passing the local file path to the AutoTokenizer. However, loading from the local path has raised further issues, and the thread is still working out the correct command to make it work.
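A minimal sketch of the suggested workaround: fetch the tokenizer from a non-GGUF variant of the model and save it to a local directory. The helper name `fetch_and_save_tokenizer` and the repo id are placeholders, not from the thread; substitute the non-GGUF repository that matches your GGUF model.

```python
def fetch_and_save_tokenizer(repo_id: str, out_dir: str) -> None:
    """Download the tokenizer from a non-GGUF repo and save it locally.

    transformers is imported lazily so this module still loads on
    machines where the library is not installed.
    """
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    # Writes tokenizer_config.json, tokenizer.json, etc. into out_dir.
    tokenizer.save_pretrained(out_dir)


# Usage (hypothetical repo id -- use your model's non-GGUF variant):
#   fetch_and_save_tokenizer("mistralai/Mistral-7B-v0.1", "./tokenizer")
```

The saved directory can then be passed to `AutoTokenizer.from_pretrained("./tokenizer")` in place of the GGUF-only repo id.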
1) When you say "tokenizer", which files are you referring to in the screenshot below? 2) I am building this code for an on-edge application. Is there any way we can bundle the tokenizer file(s) with the software so users don't have to download the tokenizer, since the customer's machine will not be connected to the internet? Thanks.
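One way to bundle the tokenizer for offline use is to ship the saved tokenizer directory inside the application and load it with `local_files_only=True` so transformers never attempts a network call. This is a sketch under assumptions: the `assets/tokenizer` layout and the `EXPECTED_FILES` list are hypothetical (the exact file set varies by model and is whatever `save_pretrained()` produced for yours).

```python
from pathlib import Path

# Typical files written by save_pretrained(); the exact set varies by model.
EXPECTED_FILES = ("tokenizer_config.json", "tokenizer.json")


def bundled_tokenizer_dir(app_root: str) -> Path:
    """Return the tokenizer directory shipped alongside the application.

    Raises FileNotFoundError if expected files are missing, so a packaging
    mistake fails fast instead of silently triggering a Hub download attempt
    on an offline machine.
    """
    tok_dir = Path(app_root) / "assets" / "tokenizer"
    missing = [f for f in EXPECTED_FILES if not (tok_dir / f).is_file()]
    if missing:
        raise FileNotFoundError(f"bundled tokenizer incomplete, missing: {missing}")
    return tok_dir


# Usage (hypothetical): load strictly from the bundled files, no network:
#   from transformers import AutoTokenizer
#   tok = AutoTokenizer.from_pretrained(str(bundled_tokenizer_dir(app_root)),
#                                       local_files_only=True)
```

Setting `local_files_only=True` (or the `HF_HUB_OFFLINE=1` environment variable) makes transformers fail immediately rather than time out trying to reach the Hub on an air-gapped machine.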