yongtaufoo123
Issue

hey guys, I'm trying to set up an LLM remotely using llama_index.llms. I've tried it with both llama_index.llms.openai and llama_index.llms.huggingface, but I'm getting connection errors for both. Any idea what could be causing this? My code was working ~3 months ago.

This is the code I'm using for each approach. Both API keys are fine (I've successfully used both keys for other calls).

For OpenAI:
Python
from llama_index.llms.openai import OpenAI

# Reads OPENAI_API_KEY from the environment by default
resp = OpenAI().complete("Paul Graham is ")
print(resp)
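Side note: to rule out the key simply not being picked up from the environment, I ran a quick offline check first (plain stdlib sketch; `missing_keys` is just a throwaway helper I wrote, not anything from llama_index):

```python
import os

def missing_keys(env, required=("OPENAI_API_KEY",)):
    """Return the names of required credentials absent (or empty) in an environ mapping."""
    return [name for name in required if not env.get(name)]

# Check the real environment before calling OpenAI().complete(...)
print(missing_keys(os.environ))
```

An empty list means the key is at least present, so the connection error is coming from somewhere else.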


For HuggingFace:
Python
# Old flat import path (pre-0.10 llama_index), no longer available:
# from llama_index.llms import HuggingFaceInferenceAPI, HuggingFaceLLM
from llama_index.llms.huggingface import HuggingFaceInferenceAPI, HuggingFaceLLM

HF_TOKEN = "hf_QjPuvxDhxxxxxxxxxxxxqpQDiH"

llm = HuggingFaceInferenceAPI(
    model_name="HuggingFaceH4/zephyr-7b-alpha", token=HF_TOKEN
)

print(llm)

completion_response = llm.complete("To infinity, and")
print(completion_response)
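For context: the commented-out flat import stopped working around the llama-index 0.10 release, when the integrations were split into separate pip packages. Here's a quick stdlib sketch I use to confirm which of those packages are actually installed (`report_versions` is just a throwaway helper):

```python
from importlib.metadata import version, PackageNotFoundError

def report_versions(packages):
    """Map each distribution name to its installed version string, or None if absent."""
    out = {}
    for name in packages:
        try:
            out[name] = version(name)
        except PackageNotFoundError:
            out[name] = None
    return out

# Since llama-index 0.10, the LLM integrations live in their own distributions:
print(report_versions([
    "llama-index",
    "llama-index-llms-openai",
    "llama-index-llms-huggingface",
]))
```

If either integration package shows `None`, the import will fail; if they're installed but old, a pinned upgrade might be worth trying.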
7 comments