The community member is trying to use Groq with llama_index but encounters an "openai.AuthenticationError" when executing the code. Their concern is not the OpenAI API key itself, but why the code refers to an OpenAI key at all when it is not used anywhere. They intend to build the application on open-source models and wonder whether llama_index needs to call OpenAI internally even when open-source models are specified.
In the comments, another community member points out that the community member did not use the embedding model, and suggests either setting it in the settings or attaching it directly to the index. The original community member acknowledges this and thanks the other community member for catching it.
There is no explicitly marked answer in the post or comments.
I am trying to use Groq with llama_index. Below is the code for the same. But while executing the code, I am getting: openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Invalid API Key'
My concern is not the invalid OpenAI API key itself. It's more about why it is even referring to the OpenAI key when I am not using it anywhere. My idea was to build the application on open-source models. Does llama_index need to refer to OpenAI for its internal working even though we specify open-source models? Please suggest @Logan M
# pip install llama-index-llms-groq
from llama_index.llms.groq import Groq
# pip install python-dotenv
from dotenv import load_dotenv
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core import PromptTemplate, Settings
from llama_index.core.embeddings import resolve_embed_model
def groq_ingest_load(query):
    # only load PDF files
    required_exts = [".pdf"]
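As the comments point out, the error comes from the embedding model: setting Settings.llm = Groq(...) only replaces the LLM, while llama_index still resolves the embedding model to its OpenAI default when building a VectorStoreIndex, which is what triggers the 401. A minimal configuration sketch of the fix, using a local HuggingFace embedding model (the model names and the "data" directory below are illustrative, not from the original post):

```python
# pip install llama-index-embeddings-huggingface
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.groq import Groq

# Configure both the LLM and the embedding model explicitly,
# so neither falls back to the OpenAI defaults.
Settings.llm = Groq(model="llama3-70b-8192", api_key="YOUR_GROQ_API_KEY")
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Indexing now uses the local embedding model, no OpenAI key needed.
documents = SimpleDirectoryReader("data", required_exts=[".pdf"]).load_data()
index = VectorStoreIndex.from_documents(documents)
```

Alternatively, the embedding model can be attached directly to the index instead of going through Settings, e.g. VectorStoreIndex.from_documents(documents, embed_model=HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")), which is the other option mentioned in the comments.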