veerlosar
Hello everyone! Could anyone explain to me (or link to an explanation) the meaning of system_prompt? It's an argument for LLMPredictor.
veerlosar

Hello,

I initialise my model like this:
Plain Text
import torch  # only needed if you enable the model_kwargs line below
from llama_index.llm_predictor import HuggingFaceLLMPredictor

stablelm_predictor = HuggingFaceLLMPredictor(
    max_input_size=4096,
    max_new_tokens=256,
    temperature=0.7,
    do_sample=False,
    system_prompt=system_prompt,
    query_wrapper_prompt=query_wrapper_prompt,
    tokenizer_name="StabilityAI/stablelm-tuned-alpha-3b",
    model_name="StabilityAI/stablelm-tuned-alpha-3b",
    device_map="auto",
    # StableLM-Tuned's special stop token ids (plus BOS/EOS)
    stopping_ids=[50278, 50279, 50277, 1, 0],
    tokenizer_kwargs={"max_length": 4096},
    # uncomment this if using CUDA to reduce memory usage
    # model_kwargs={"torch_dtype": torch.float16}
)
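
For context, my system_prompt and query_wrapper_prompt follow the StableLM example from the docs; roughly like this (my system prompt is shortened here, so treat the exact wording as illustrative):
Plain Text
from llama_index.prompts.prompts import SimpleInputPrompt

# The system prompt is prepended to every prompt sent to the model;
# StableLM-Tuned expects its <|SYSTEM|>/<|USER|>/<|ASSISTANT|> token format.
system_prompt = """<|SYSTEM|># StableLM Tuned (Alpha version)
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
- StableLM will refuse to participate in anything that could harm a human.
"""

# Wraps each query string in StableLM's user/assistant turn tokens.
query_wrapper_prompt = SimpleInputPrompt("<|USER|>{query_str}<|ASSISTANT|>")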

ServiceContext:
Plain Text
service_context = ServiceContext.from_defaults(
    prompt_helper=prompt_helper, 
    llm_predictor=stablelm_predictor
)

Finally, I build the index:
Plain Text
index = GPTVectorStoreIndex.from_documents(
    documents, 
    service_context=service_context
)

When I initialise the index, I get the following error:
Plain Text
AuthenticationError: No API key provided. You can set your API key in code using 'openai.api_key = <API-KEY>', or you can set the environment variable OPENAI_API_KEY=<API-KEY>). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = <PATH>'. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details.

Why am I still getting this error if I pass a model in service_context?
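
My current guess: GPTVectorStoreIndex also embeds the documents at build time, and ServiceContext.from_defaults falls back to OpenAI's embedding model when no embed_model is passed, which would explain why an OpenAI key is requested even though the LLM is local. A minimal sketch of what I plan to try, swapping in a local HuggingFace embedding model via langchain (using its default sentence-transformers model, which is an assumption on my part):
Plain Text
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from llama_index import LangchainEmbedding, ServiceContext

# Wrap a local sentence-transformers embedding model so that
# building the vector index does not touch the OpenAI API.
embed_model = LangchainEmbedding(HuggingFaceEmbeddings())

service_context = ServiceContext.from_defaults(
    prompt_helper=prompt_helper,
    llm_predictor=stablelm_predictor,
    embed_model=embed_model,
)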

I checked the related issue #852 on GitHub, but the link https://gpt-index.readthedocs.io/en/latest/how_to/custom_llms.html#example-using-a-custom-llm-model no longer works.
Thanks in advance for your help!