
Updated 2 years ago

At a glance

The community member initializes a model with HuggingFaceLLMPredictor and creates a GPTVectorStoreIndex. However, when initializing the index they encounter an AuthenticationError stating that no API key is provided. They checked a related issue on GitHub, but the link it provides does not seem to help.

In the comments, another community member explains that an embed model must also be provided, since the LLM predictor is only used for generating text, and links to the documentation on custom embeddings. When asked why the same code works with GPTListIndex, they explain that the list index does not use the embed model, so the error does not occur there.

Hello,

I initialise my model like this:
Plain Text
# llama_index ~0.6 import (the exact path may differ across versions):
# from llama_index.llm_predictor import HuggingFaceLLMPredictor
stablelm_predictor = HuggingFaceLLMPredictor(
    max_input_size=4096, 
    max_new_tokens=256,
    temperature=0.7,
    do_sample=False,
    system_prompt=system_prompt,
    query_wrapper_prompt=query_wrapper_prompt,
    tokenizer_name="StabilityAI/stablelm-tuned-alpha-3b",
    model_name="StabilityAI/stablelm-tuned-alpha-3b",
    device_map="auto",
    stopping_ids=[50278, 50279, 50277, 1, 0],
    tokenizer_kwargs={"max_length": 4096},
    # uncomment this if using CUDA to reduce memory usage
    # model_kwargs={"torch_dtype": torch.float16}
)

ServiceContext:
Plain Text
service_context = ServiceContext.from_defaults(
    prompt_helper=prompt_helper, 
    llm_predictor=stablelm_predictor
)

Finally, index:
Plain Text
index = GPTVectorStoreIndex.from_documents(
    documents, 
    service_context=service_context
)

When I initialise the index, I get the following error:
Plain Text
AuthenticationError: No API key provided. You can set your API key in code using 'openai.api_key = <API-KEY>', or you can set the environment variable OPENAI_API_KEY=<API-KEY>). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = <PATH>'. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details.

Why am I still getting this error if I pass a model in service_context?

I checked the related issue #852 on GitHub, but the link https://gpt-index.readthedocs.io/en/latest/how_to/custom_llms.html#example-using-a-custom-llm-model does not seem to help.
Thanks in advance for your help!
6 comments
You'll need to also provide an embed model; the LLM predictor is only used for generating text.
@Logan M Thanks! But why does exactly the same code work perfectly well with GPTListIndex then?
The list index doesn't use the embed model, so the error does not happen. Very sneaky πŸ₯·
@Logan M i see, thanks!