Can I use llama-cpp-python with LlamaIndex to run a fine-tuned Llama 3 model in GGUF format from Hugging Face?
Hugging Face's transformers library recently added support for loading GGUF files, so yes.
Use `HuggingFaceLLM`: load the model and tokenizer, then pass them in:

HuggingFaceLLM(model=model, tokenizer=tokenizer, ...)
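A minimal sketch of the steps above. The repo and file names are hypothetical placeholders for your own fine-tuned checkpoint; this assumes transformers' `gguf_file` argument to `from_pretrained` (which dequantizes the GGUF weights on load, so it does not use llama.cpp for inference) and LlamaIndex's `HuggingFaceLLM` wrapper:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from llama_index.llms.huggingface import HuggingFaceLLM

# Hypothetical names -- substitute your fine-tuned Llama 3 GGUF repo/file.
repo_id = "your-username/llama3-finetune-gguf"
gguf_file = "model-q4_k_m.gguf"

# transformers dequantizes the GGUF checkpoint into regular torch weights.
tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=gguf_file)

# Pass the loaded objects straight into the LlamaIndex wrapper.
llm = HuggingFaceLLM(model=model, tokenizer=tokenizer)
response = llm.complete("Hello!")
print(response)
```

If you specifically want llama-cpp-python to do the inference (e.g. for quantized CPU execution), LlamaIndex also ships a separate `LlamaCPP` integration (the `llama-index-llms-llama-cpp` package) that takes a path to the GGUF file directly.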