
Updated 10 months ago

Issue

Hey guys, I'm trying to set up an LLM remotely using llama_index.llms. I've tried it with both llama_index.llms.openai and llama_index.llms.huggingface, but I'm getting connection errors for both. Any idea what could be causing this? My code was working ~3 months ago.

This is the code I'm using for each approach. Both API keys are fine (I've successfully used both keys for other calls).

For OpenAI:
Plain Text
from llama_index.llms.openai import OpenAI

# OpenAI() reads the OPENAI_API_KEY environment variable by default
resp = OpenAI().complete("Paul Graham is ")
print(resp)


For HuggingFace:
Plain Text
# from llama_index.llms import HuggingFaceInferenceAPI, HuggingFaceLLM
from llama_index.llms.huggingface import HuggingFaceInferenceAPI, HuggingFaceLLM

HF_TOKEN = "hf_QjPuvxDhxxxxxxxxxxxxqpQDiH"

llm = HuggingFaceInferenceAPI(
    model_name="HuggingFaceH4/zephyr-7b-alpha", token=HF_TOKEN
)

print(llm)

completion_response = llm.complete("To infinity, and")
print(completion_response)
7 comments
What's the actual error?
Hey, thanks for the response!
These are the errors I get, respectively.
The OpenAI error says the API key is empty.
Did you set it?

You can export it in your terminal environment, or set it in your script:

Plain Text
import os
os.environ["OPENAI_API_KEY"] = "sk-..."


Or you can pass it directly when constructing the LLM:

Plain Text
llm = OpenAI(..., api_key="sk-...")
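If you want the missing-key case to fail fast with a clear message instead of a confusing "empty key" error downstream, a small helper like this can check the environment before you construct the client (a sketch; the helper name is made up):

Plain Text
```python
import os

def require_api_key(var_name: str) -> str:
    """Return the key stored in the given environment variable,
    or raise a clear error if it is missing or blank."""
    key = os.environ.get(var_name, "").strip()
    if not key:
        raise RuntimeError(
            f"{var_name} is not set; export it or pass api_key=... explicitly"
        )
    return key

# e.g. OpenAI(api_key=require_api_key("OPENAI_API_KEY"))
```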
The Hugging Face error just means the endpoint is overloaded, I think; if you're using the free Inference API instance, it gets overloaded easily.
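If the free endpoint really is just overloaded, a simple retry with exponential backoff will usually get past transient failures (a generic sketch; `call` stands in for something like `llm.complete`):

Plain Text
```python
import time

def complete_with_retry(call, prompt, retries=3, base_delay=1.0):
    """Call `call(prompt)`, retrying with exponential backoff on failure.
    `call` stands in for something like llm.complete."""
    for attempt in range(retries):
        try:
            return call(prompt)
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries, surface the real error
            time.sleep(base_delay * (2 ** attempt))  # wait 1s, 2s, 4s, ...
```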
Gotcha! Managed to resolve the error, thanks so much.