Hello,
I initialise my model like this:
stablelm_predictor = HuggingFaceLLMPredictor(
    max_input_size=4096,
    max_new_tokens=256,
    temperature=0.7,
    do_sample=False,
    system_prompt=system_prompt,
    query_wrapper_prompt=query_wrapper_prompt,
    tokenizer_name="StabilityAI/stablelm-tuned-alpha-3b",
    model_name="StabilityAI/stablelm-tuned-alpha-3b",
    device_map="auto",
    stopping_ids=[50278, 50279, 50277, 1, 0],
    tokenizer_kwargs={"max_length": 4096},
    # uncomment this if using CUDA to reduce memory usage
    # model_kwargs={"torch_dtype": torch.float16}
)
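(In case it matters: system_prompt, query_wrapper_prompt, and prompt_helper follow the StableLM example from the docs, roughly like this, abridged:)

from llama_index import PromptHelper
from llama_index.prompts.prompts import SimpleInputPrompt

# StableLM's system preamble (shortened here)
system_prompt = "<|SYSTEM|># StableLM Tuned (Alpha version)\n..."

# wraps each query in StableLM's chat format
query_wrapper_prompt = SimpleInputPrompt("<|USER|>{query_str}<|ASSISTANT|>")

# context window size, output length, chunk overlap
prompt_helper = PromptHelper(max_input_size=4096, num_output=256, max_chunk_overlap=20)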
Then I set up the ServiceContext:
service_context = ServiceContext.from_defaults(
    prompt_helper=prompt_helper,
    llm_predictor=stablelm_predictor
)
Finally, I build the index:
index = GPTVectorStoreIndex.from_documents(
    documents,
    service_context=service_context
)
When I initialise the index, I get the following error:
AuthenticationError: No API key provided. You can set your API key in code using 'openai.api_key = <API-KEY>', or you can set the environment variable OPENAI_API_KEY=<API-KEY>). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = <PATH>'. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details.
Why am I still getting this error if I pass a model in service_context?
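My current guess is that the LLM predictor only replaces the completion model, and that building the vector index still calls the OpenAI embeddings API because I never set an embed_model. Would something like this be the fix? (Just a sketch; I'm assuming the LangchainEmbedding wrapper around a local HuggingFace model is the right approach here.)

from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from llama_index import LangchainEmbedding

# local embedding model, so the index shouldn't need the OpenAI API
embed_model = LangchainEmbedding(
    HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
)

service_context = ServiceContext.from_defaults(
    prompt_helper=prompt_helper,
    llm_predictor=stablelm_predictor,
    embed_model=embed_model
)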
I checked the related #852 on GitHub, but the link https://gpt-index.readthedocs.io/en/latest/how_to/custom_llms.html#example-using-a-custom-llm-model no longer works.
Thanks in advance for your help!