Hey guys, I'm trying to call an LLM remotely using llama_index.llms. I've tried it with both
llama_index.llms.openai
and
llama_index.llms.huggingface
but I'm getting connection errors for both. Any idea what could be causing this? My code was working ~3 months ago.
Below is the code I'm using for each approach. Both API keys are fine (I've successfully used both keys for other calls).
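Since nothing in my own code changed, my first suspicion is a package update. Here's a small helper I run to see which llama-index pieces are actually installed and at what versions (the per-integration package names are my guess at how the project is split up now, so treat them as assumptions):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg):
    """Return the installed version string for pkg, or None if it's absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

# Package names below are assumptions about the current llama-index layout;
# adjust to whatever `pip list | grep llama` shows in your environment.
for pkg in ("llama-index", "llama-index-llms-openai", "llama-index-llms-huggingface"):
    print(pkg, "->", installed_version(pkg))
```

If any of these come back None, or newer than the version from ~3 months ago, that would point at an import-path or API change rather than a real network problem.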
For OpenAI:
from llama_index.llms.openai import OpenAI
resp = OpenAI().complete("Paul Graham is ")
print(resp)
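To rule out llama_index on the OpenAI side, I can also hit the REST endpoint directly with just the stdlib. This is only a sketch; the endpoint and payload shape are from the public chat-completions docs, and the network call only fires if OPENAI_API_KEY is set:

```python
import json
import os
from urllib import request

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt, api_key, model="gpt-3.5-turbo"):
    """Assemble the raw chat-completions request (no network I/O here)."""
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return request.Request(
        OPENAI_URL,
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

key = os.environ.get("OPENAI_API_KEY")
if key:
    with request.urlopen(build_request("Paul Graham is ", key)) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

If this raw call succeeds while the llama_index version fails, the problem is in the wrapper, not the connection.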
For HuggingFace:
# old import path, no longer works: from llama_index.llms import HuggingFaceInferenceAPI, HuggingFaceLLM
from llama_index.llms.huggingface import HuggingFaceInferenceAPI, HuggingFaceLLM
HF_TOKEN = "hf_QjPuvxDhxxxxxxxxxxxxqpQDiH"
llm = HuggingFaceInferenceAPI(
    model_name="HuggingFaceH4/zephyr-7b-alpha", token=HF_TOKEN
)
print(llm)
completion_response = llm.complete("To infinity, and")
print(completion_response)
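And the equivalent direct check for the Hugging Face side, bypassing llama_index entirely. The Inference API endpoint here is my understanding from the HF docs and may have moved; the POST only fires if HF_TOKEN is set in the environment:

```python
import json
import os
from urllib import request
from urllib.parse import quote

API_BASE = "https://api-inference.huggingface.co/models/"

def hf_inference_url(model_name):
    """Build the (assumed) Inference API URL for a model repo id."""
    return API_BASE + quote(model_name, safe="/")

def hf_query(model_name, prompt, token):
    """POST a text-generation request and return the decoded JSON response."""
    req = request.Request(
        hf_inference_url(model_name),
        data=json.dumps({"inputs": prompt}).encode(),
        headers={"Authorization": f"Bearer {token}"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

token = os.environ.get("HF_TOKEN")
if token:
    print(hf_query("HuggingFaceH4/zephyr-7b-alpha", "To infinity, and", token))
```

If this raw request also fails with a connection error, the issue is network/DNS/proxy rather than llama_index; if it succeeds, the wrapper is likely what broke in the last ~3 months.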