hello community

hello community!
I have a question: is there a way to use Google PaLM 2 with the JSON credentials instead of an API key? I only see it used with the API key here: https://gpt-index.readthedocs.io/en/latest/examples/llm/palm.html
6 comments
Hmmm right now the only other option is passing it in directly

from llama_index.llms import PaLM

llm = PaLM(api_key="...")
so either that or the env var
definitely welcome a PR to change that though!
Thanks @Logan M for the response.
So when you say env var, you mean setting up GOOGLE_APPLICATION_CREDENTIALS with the path to the JSON key?

So, from the langchain documentation (which, btw, works very well for me for setting up GOOGLE_APPLICATION_CREDENTIALS):
To use Vertex AI PaLM you must have the google-cloud-aiplatform Python package installed and either:

Have credentials configured for your environment (gcloud, workload identity, etc...)
Store the path to a service account JSON file as the GOOGLE_APPLICATION_CREDENTIALS environment variable
This codebase uses the google.auth library which first looks for the application credentials variable mentioned above, and then looks for system-level auth.
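A minimal sketch of that setup (the service-account path is a placeholder):

Python
import os
import google.auth

# Point application-default credentials at the service-account key file
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account.json"

# google.auth picks up the variable above, or falls back to system-level auth
credentials, project_id = google.auth.default()
print(project_id)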


Surprisingly, when I try the same thing with llama_index, I get the following:
google.api_core.exceptions.PermissionDenied: 403 Generative Language API has not been used in project xxxxxxx before or it is disabled.

I am using the exact same location for the JSON credentials for both langchain and llama_index.
I thiiiiink langchain is using something different (Vertex AI) -- we are using Google's generative AI package: https://github.com/google/generative-ai-python
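For context, roughly what the llama_index PaLM path boils down to (a sketch assuming the configure call from the generative-ai-python package linked above; the key is a placeholder):

Python
import google.generativeai as palm

# This targets the Generative Language API rather than Vertex AI, so the
# project behind your credentials must have that API enabled -- hence the 403
palm.configure(api_key="...")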

In any case, if the langchain version works well for you, you can still use it in llama-index; you just have to wrap it:

Python
from llama_index.llms import LangChainLLM

llm = LangChainLLM(<palm llm from langchain>)
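For example, wrapping the Vertex AI PaLM LLM from langchain (VertexAI here is assumed from the langchain docs quoted above, not shown in this thread):

Python
from langchain.llms import VertexAI
from llama_index.llms import LangChainLLM

# VertexAI picks up GOOGLE_APPLICATION_CREDENTIALS / gcloud auth on its own,
# so no API key is needed
llm = LangChainLLM(llm=VertexAI())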
dang, this is some layer 8 issue I raised here in the end 🙂. Thanks for the help!