AuthenticationError

Hi, I'm getting this every time while using llama_index to retrieve from PDFs:

raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x2250e91d6f0 state=finished raised AuthenticationError>]

Can anyone help? I'm using a .env file to store my API key and have tried other methods of calling the key as well.
7 comments
There have been some issues with OpenAI in the newer versions.

Did you try setting up the key like this?

import openai
openai.api_key = 'ADD_KEY_HERE'
Yes, I did. Basically I tried every method available in the OpenAI docs.
This is strange. What version are you trying with?
Also, can you share your code?
Yeah, the above has always worked for me (in addition to setting it in your env).

You could also print(os.environ["OPENAI_API_KEY"]) to confirm it's in your env, and then use that to set the openai module.
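To make that check reusable, here is a minimal sketch that fails fast with a readable message instead of the buried tenacity RetryError. The helper name get_openai_key is mine for illustration, not part of llama_index or openai:

```python
import os

def get_openai_key():
    # Hypothetical helper: read the key that load_dotenv() should have
    # placed in the environment, and fail with a clear message if absent.
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set - check that load_dotenv() ran "
            "and that the .env file is in the working directory."
        )
    return key
```

Calling it once at startup and assigning the result to openai.api_key catches a missing key before any index call triggers the retry loop.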
import gradio as gr
from llama_index import SimpleDirectoryReader, LLMPredictor, PromptHelper, StorageContext, ServiceContext, \
    GPTVectorStoreIndex, load_index_from_storage
from langchain.chat_models import ChatOpenAI
import openai
import sys

import os
from dotenv import load_dotenv

load_dotenv()
openai_api_key = os.environ["OPENAI_API_KEY"]


def create_service_context():
    max_input_size = 4096
    num_outputs = 512
    max_chunk_overlap = 0.2
    chunk_size_limit = 6000

    prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit)

    # LLMPredictor is a wrapper class around LangChain's LLMChain that allows easy integration into LlamaIndex
    llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.5, model_name="gpt-3.5-turbo", max_tokens=num_outputs))

    # Constructs service_context
    service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper)
    return service_context


directory_path = r'C:\Users\tejas\OneDrive\Desktop\docs'


def data_ingestion_indexing(directory_path):
    documents = SimpleDirectoryReader(directory_path).load_data()

    index = GPTVectorStoreIndex.from_documents(
        documents, service_context=create_service_context()
    )

    index.storage_context.persist()
    return index


def data_querying(input_text):
    storage_context = StorageContext.from_defaults(persist_dir="./storage")

    index = load_index_from_storage(storage_context, service_context=create_service_context())

    response = index.as_query_engine().query(input_text)

    return response.response


iface = gr.Interface(
    fn=data_querying,
    inputs=gr.components.Textbox(lines=7, label="Enter your query"),
    outputs="text",
    title="llamaIndex"
)

index = data_ingestion_indexing(directory_path)
iface.launch(share=False)
  1. Did you try printing openai_api_key to see whether it is actually getting a key from the .env file?
  2. Your chunk size limit is very large; it should be much smaller than that. The default is 1024.
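To see why point 2 matters, here is a quick sanity check on the numbers from the posted code, assuming gpt-3.5-turbo's 4096-token context window: a chunk_size_limit of 6000 cannot fit in the window alongside the tokens reserved for the completion.

```python
# Numbers taken from create_service_context() in the posted code.
max_input_size = 4096    # model context window
num_outputs = 512        # tokens reserved for the completion
chunk_size_limit = 6000  # chunk size from the posted code

# Tokens actually available for a chunk plus prompt scaffolding.
max_usable = max_input_size - num_outputs
print(max_usable)                     # 3584
print(chunk_size_limit > max_usable)  # True: chunks can overflow the window
```

Dropping chunk_size_limit to the default 1024 keeps every chunk comfortably inside the window (though this is separate from the AuthenticationError itself, which is about the key).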
Okay, let me try that.