
Updated 11 months ago

@Logan M why do I keep getting this

At a glance

The community member is encountering an error related to loading the OpenAI embedding model, even though they are using the Google Gemini model. The community members suggest that the issue is related to not setting up the embed model properly. They provide suggestions such as installing the llama-index-embeddings-gemini package and setting the Settings.embed_model to GeminiEmbedding(). However, the community member still encounters issues with the API key, specifically related to Google Cloud SDK credentials. The community members discuss how to properly set up the API key, either in the environment file or by hard-coding it, but there is no explicitly marked answer in the comments.

Useful resources
@Logan M why do I keep getting this error:

Could not load OpenAI embedding model. If you intended to use OpenAI, please check your OPENAI_API_KEY. Original error: No API key found for OpenAI. Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization. API keys can be found or created at https://platform.openai.com/account/api-keys

I am using Google Gemini and I hard-coded the API key, and I also put it in the .env file as: API_KEY = "api-is-here"
my code is:
import os
from llama_index.core import StorageContext, VectorStoreIndex, load_index_from_storage
from llama_index.readers.file import PDFReader
from dotenv import load_dotenv
from llama_index.core import Settings
from llama_index.llms.gemini import Gemini

Settings.llm = Gemini()
load_dotenv()

def get_index(data, index_name):
    index = None
    if not os.path.exists(index_name):
        print("building index", index_name)
        index = VectorStoreIndex.from_documents(data, show_progress=True)
        index.storage_context.persist(persist_dir=index_name)
    else:
        index = load_index_from_storage(
            StorageContext.from_defaults(persist_dir=index_name)
        )
    return index

pdf_path = os.path.join("data", "the-tafsir-of-the-quran.pdf")
canada_pdf = PDFReader().load_data(file=pdf_path)
canada_index = get_index(canada_pdf, "canada")
canada_engine = canada_index.as_query_engine()
12 comments
Because you didn't set up an embed model
so it's defaulting to openai
how can I do that with Gemini?

for the other files I did this:

from llama_index.llms.gemini import Gemini

GOOGLE_API_KEY = "api-here-"
os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY
llm = Gemini()

and it works, but even if I do that for the code above ^ I am still getting the same error. I've been coding since I woke up 😦
Plain Text
pip install llama-index-embeddings-gemini


Plain Text
from llama_index.embeddings.gemini import GeminiEmbedding

Settings.embed_model = GeminiEmbedding()
Something like that
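Putting those suggestions together, a minimal configuration sketch might look like this (untested; it assumes the llama-index-llms-gemini and llama-index-embeddings-gemini packages are installed, and uses a placeholder key):

```python
import os
from llama_index.core import Settings
from llama_index.llms.gemini import Gemini
from llama_index.embeddings.gemini import GeminiEmbedding

# The Gemini clients read GOOGLE_API_KEY from the environment, so it must
# be visible BEFORE Gemini()/GeminiEmbedding() are constructed.
os.environ["GOOGLE_API_KEY"] = "api-here"  # placeholder, not a real key

Settings.llm = Gemini()
Settings.embed_model = GeminiEmbedding()
```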
What about the API key — will it use the one I provide in the .env file, such as: API_KEY = "api-here", or do I have to hard-code it?
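On the .env question: load_dotenv() only copies whatever names appear in the file into os.environ, and the Gemini client looks for the specific name GOOGLE_API_KEY, so an entry called API_KEY won't be picked up. A stdlib-only sketch of that mechanic (the file contents and key value are placeholders, and the parsing below is a simplified stand-in for python-dotenv):

```python
import os

os.environ.pop("GOOGLE_API_KEY", None)  # start clean for the demo

# Simulating what load_dotenv() does: copy KEY=VALUE lines into os.environ.
# Names are preserved as-is, so `API_KEY=...` creates API_KEY, not GOOGLE_API_KEY.
env_file_lines = ['API_KEY="api-here"']  # hypothetical .env contents
for line in env_file_lines:
    key, _, value = line.partition("=")
    os.environ[key.strip()] = value.strip().strip('"')

print(os.environ.get("GOOGLE_API_KEY"))  # None — a Gemini client would find nothing
print(os.environ.get("API_KEY"))         # api-here
```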
@Logan M now I did this:
Settings.llm = Gemini()
GOOGLE_API_KEY = "api-here"
os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY
Settings.embed_model = GeminiEmbedding()

new error:
raise exceptions.DefaultCredentialsError(_CLOUD_SDK_MISSING_CREDENTIALS)
google.auth.exceptions.DefaultCredentialsError: Your default credentials were not found. To set up Application Default Credentials, see https://cloud.google.com/docs/authentication/external/set-up-adc for more information.

I went to the link it provided and I have a question: should I still use the API key that I used for the Gemini model, or a different key since it is an embedding model? And should I put the key in the .env file so LlamaIndex will fetch it?
Idk how gemini works man, all I know is what's in the docs
nvm I read the docs and found the issue in my code, thanks though man @Logan M
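For anyone landing here later: in the original snippet, Settings.llm = Gemini() runs before load_dotenv(), and the .env entry is named API_KEY rather than GOOGLE_API_KEY, so no key is visible when the clients are built. A plausible corrected ordering (untested sketch; it assumes the .env file has been changed to contain a GOOGLE_API_KEY entry):

```python
import os
from dotenv import load_dotenv
from llama_index.core import Settings
from llama_index.llms.gemini import Gemini
from llama_index.embeddings.gemini import GeminiEmbedding

load_dotenv()  # populate GOOGLE_API_KEY from .env BEFORE building the clients

Settings.llm = Gemini()
Settings.embed_model = GeminiEmbedding()
```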