
openchat/openchat_3.5 · Hugging Face

Is it possible to use openchat/openchat_3.5 with LlamaIndex? How can I do this?
It will work. You just need to pass in the model name:

from llama_index.llms.huggingface import HuggingFaceLLM
from llama_index.core import PromptTemplate

llm = HuggingFaceLLM(
    model_name="TheBloke/openchat_3.5-GGUF",
    tokenizer_name="TheBloke/openchat_3.5-GGUF",
    query_wrapper_prompt=PromptTemplate("GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant:"),
    context_window=4096,
    max_new_tokens=1024,
    generate_kwargs={"temperature": 0.1, "top_k": 50, "top_p": 0.95, "do_sample": True},
    device_map="auto",
)


I got this error: TheBloke/openchat_3.5-GGUF does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.
I had success using LlamaCPP instead; the GGUF repo only ships llama.cpp quantizations, not the PyTorch weights that HuggingFaceLLM (which loads via transformers) expects.
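For reference, here is a minimal sketch of the LlamaCPP route. The model URL, the quantization file name (Q4_K_M), and the parameter values are assumptions, not something confirmed in this thread; the prompt formatting mirrors the query_wrapper_prompt from the HuggingFaceLLM attempt above.

```python
# Hypothetical sketch: loading a GGUF build of openchat_3.5 with LlamaIndex's
# LlamaCPP wrapper. Requires llama-cpp-python to be installed.

def completion_to_prompt(prompt: str) -> str:
    # Wrap raw completions in OpenChat 3.5's chat template, matching the
    # query_wrapper_prompt used with HuggingFaceLLM above.
    return f"GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant:"

def build_llm():
    # Imported lazily so the sketch can be read without llama-cpp-python installed.
    from llama_index.llms.llama_cpp import LlamaCPP

    return LlamaCPP(
        # Assumed file name within TheBloke/openchat_3.5-GGUF; pick the
        # quantization that fits your hardware.
        model_url="https://huggingface.co/TheBloke/openchat_3.5-GGUF/resolve/main/openchat_3.5.Q4_K_M.gguf",
        temperature=0.1,
        max_new_tokens=1024,
        context_window=4096,
        completion_to_prompt=completion_to_prompt,
        model_kwargs={"n_gpu_layers": -1},  # offload all layers when a GPU is available
        verbose=True,
    )
```

You would then call build_llm() once and pass the result wherever LlamaIndex expects an llm (e.g. Settings.llm or a query engine).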