Updated 4 months ago

Are OpenAI API keys handled differently

At a glance

The community members are discussing issues with using OpenAI API keys after a recent change. One community member is experiencing an error related to not finding the API key, and they confirm the key is set correctly in their environment variable. Another community member suggests passing the API key directly to the OpenAI function. The community members also discuss issues with using a PDF script, and one community member eventually figures out that they needed to import the openai module into the PDF reader file to resolve the issue.

Are OpenAI API keys handled differently after the switch? My code no longer works after they swapped to organization keys 😭
13 comments
Hmm, I think they still work the same? Maybe try a new key? Or what's the error?
I'm getting this:


ValueError:
**
Could not load OpenAI embedding model. If you intended to use OpenAI, please check your OPENAI_API_KEY.
Original error:
No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys

Consider using embed_model='local'.
Visit our documentation for more embedding options: https://docs.llamaindex.ai/en/stable/module_guides/models/embeddings.html#modules
**
Hmm, is the key in your env? You can also pass in the key directly:
OpenAI(..., api_key="...")
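For reference, a minimal sketch of both options, assuming llama-index's OpenAI wrapper; the key value is a placeholder, not a real key:

```python
import os

# Option 1: set the environment variable (the exact name matters: OPENAI_API_KEY)
os.environ["OPENAI_API_KEY"] = "sk-placeholder"  # hypothetical key, for illustration only

# Option 2: pass the key directly, as suggested above. Commented out because
# it needs llama-index installed and a real key:
# from llama_index.llms.openai import OpenAI
# llm = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

print(os.environ["OPENAI_API_KEY"])
```

Either way, the key has to be visible to the process before any llama-index component tries to build embeddings.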
Yeah it is in my env as OPEN_API_KEY = ...
It should be OPENAI_API_KEY
my bad it is
i have it correctly
sent in discord incorrectly
You can do it like @Logan M suggested, if the env variable isn't working for you.
Kind of figured out the issue: it breaks when I try to use my PDF script

any idea why?


Using a vector store index, creating embeddings for easy indexing and querying, similarity search pretty much


import os
from llama_index.core import StorageContext, VectorStoreIndex, load_index_from_storage

# Change line 6 to different types of indexes
from llama_index.readers.file import PDFReader

def get_index(data, index_name):
    index = None
    if not os.path.exists(index_name):
        print("building index", index_name)
        index = VectorStoreIndex.from_documents(data, show_progress=True)
        index.storage_context.persist(persist_dir=index_name)
    else:
        index = load_index_from_storage(
            StorageContext.from_defaults(persist_dir=index_name)
        )
    return index


pdf_path = os.path.join("data", "dad.pdf")
dad_pdf = PDFReader().load_data(file=pdf_path)

dad_index = get_index(dad_pdf, "dad_info")
dad_engine = dad_index.as_query_engine()
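The get_index helper above is a build-or-load cache: it builds and persists the index only when the persist directory is missing, and otherwise loads it from disk. The same pattern with llama-index stripped out, so it can run anywhere (get_or_build, build_fn, and the store filename are hypothetical names for illustration):

```python
import json
import os

def get_or_build(path, build_fn):
    """Build-or-load cache mirroring get_index: on a cache miss, build the
    data and persist it to disk; on a hit, load it from disk instead."""
    store = os.path.join(path, "store.json")
    if not os.path.exists(path):
        print("building", path)
        data = build_fn()
        os.makedirs(path, exist_ok=True)
        with open(store, "w") as f:
            json.dump(data, f)
    else:
        with open(store) as f:
            data = json.load(f)
    return data
```

One consequence of this design: if the embedding step fails partway (e.g. because the API key is missing), delete the persist directory before retrying, or the next run may try to load an incomplete index.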
I figured it out πŸ™‚
Turns out I needed to import openai into the PDF reader file, and now it works