Updated 2 years ago

Are there any good examples of using llama_index with a model on the Hugging Face Inference API? I know I'll load the model using llm = HuggingFaceHub(...), but (a) it seems I still need a local embedding model, and (b) even when I use a local embedding model, I get "Empty Response" in an app where using llm = GPT4All(...) works well.
2 comments
Yea you still need an embedding model...

I've never used huggingface hub before though. What does your setup look like?
Here's what I've got; it's ugly in many ways: https://pastebin.com/aGPk0asK
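For reference, a minimal sketch of the kind of setup discussed in this thread: a remote LLM on the Hugging Face Inference API via HuggingFaceHub, plus a local embedding model wired into llama_index. This assumes a ~0.6-era llama_index API (ServiceContext, LangchainEmbedding), langchain installed, a HUGGINGFACEHUB_API_TOKEN environment variable, and a "data" directory of documents; the repo_id and embedding model names are illustrative, not the poster's actual code.

```python
def build_query_engine(data_dir="data"):
    """Sketch: llama_index query engine with a remote HF Hub LLM
    and a local embedding model (~0.6-era llama_index API, assumed)."""
    # Imports kept inside the function so the sketch can be read/loaded
    # without llama_index or langchain installed.
    from langchain.embeddings import HuggingFaceEmbeddings
    from langchain.llms import HuggingFaceHub
    from llama_index import (
        LLMPredictor,
        ServiceContext,
        SimpleDirectoryReader,
        VectorStoreIndex,
    )
    from llama_index.embeddings import LangchainEmbedding

    # Remote LLM on the Inference API; requires HUGGINGFACEHUB_API_TOKEN.
    # repo_id is illustrative -- pick a text-generation/text2text model.
    llm_predictor = LLMPredictor(
        llm=HuggingFaceHub(
            repo_id="google/flan-t5-large",
            model_kwargs={"temperature": 0.1, "max_length": 256},
        )
    )

    # Local embedding model: HuggingFaceHub only serves the LLM, so
    # embeddings still run locally (this one downloads on first use).
    embed_model = LangchainEmbedding(
        HuggingFaceEmbeddings(
            model_name="sentence-transformers/all-MiniLM-L6-v2"
        )
    )

    service_context = ServiceContext.from_defaults(
        llm_predictor=llm_predictor,
        embed_model=embed_model,
    )

    docs = SimpleDirectoryReader(data_dir).load_data()
    index = VectorStoreIndex.from_documents(
        docs, service_context=service_context
    )
    return index.as_query_engine()
```

On the "Empty Response" symptom: one common first debugging step is to call the HuggingFaceHub llm directly (e.g. llm("test prompt")) outside llama_index; if that returns an empty or error string, the problem is the remote model or token, not the index.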