At a glance

The community member is trying to use Groq with llama_index, but is encountering an openai.AuthenticationError when executing the code. The concern is not the OpenAI API key itself, but rather why the code refers to an OpenAI key at all when it is not used anywhere. The community member intends to build the application on open-source models, and wonders whether llama_index needs to refer to OpenAI for its internal workings even when open-source models are used.

In the comments, another community member points out that the community member did not use the embedding model, and suggests either setting it in the settings or attaching it directly to the index. The original community member acknowledges this and thanks the other community member for catching it.

There is no explicitly marked answer in the post or comments.
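
For context: when no embedding model is configured, LlamaIndex's global Settings.embed_model defaults to OpenAI embeddings, so VectorStoreIndex.from_documents() calls the OpenAI API while embedding the documents, which is what raises the 401. A minimal sketch of the fix, reusing the local BAAI/bge-small-en-v1.5 model the question already resolves:

# With no embed_model configured, index construction falls back to
# OpenAI embeddings and therefore needs a valid OpenAI API key.
from llama_index.core import Settings
from llama_index.core.embeddings import resolve_embed_model

# Point the global default at a local HuggingFace model instead:
Settings.embed_model = resolve_embed_model("local:BAAI/bge-small-en-v1.5")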

I am trying to use Groq with llama_index. Below is the code for the same. But while executing the code, I am getting: openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Invalid API Key'

My concern is not related to the OpenAI invalid API key. It's more about why it is even referring to the OpenAI key when I am not using it anywhere. My idea was to have the application built on open-source models.
Does llama_index need to refer to OpenAI for its internal workings even though we specify open-source models? Please suggest @Logan M

# pip install llama-index-llms-groq
from llama_index.llms.groq import Groq
# pip install python-dotenv
from dotenv import load_dotenv
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core import PromptTemplate, Settings
from llama_index.core.embeddings import resolve_embed_model

# load GROQ_API_KEY from .env
load_dotenv()

def groq_ingest_load(query):
    # only load PDF files
    required_exts = [".pdf"]

    # load documents
    loader = SimpleDirectoryReader(
        "data",
        required_exts=required_exts
    )

    documents = loader.load_data()

    # create embeddings using HuggingFace model
    embed_model = resolve_embed_model("local:BAAI/bge-small-en-v1.5")

    # 'template' is assumed to be defined elsewhere; 'prompt' is not used below
    prompt = PromptTemplate(template=template)

    # define llm
    llm = Groq(model="llama3-8b-8192", request_timeout=3000)

    # setting up llm and output tokens
    Settings.llm = llm
    Settings.num_output = 250

    # define index
    index = VectorStoreIndex.from_documents(documents)

    # define query engine
    query_engine = index.as_query_engine()

    # ask query and get response
    response = query_engine.query(query)

    print(response)
You didn't use your embedding model

Either set it in settings

Settings.embed_model = ...

Or attach it directly to the index

VectorStoreIndex(..., embed_model=embed_model)
ahh !! missed it ! Thank you for catching this ! πŸ™‚
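
For reference, a minimal sketch applying both suggestions to the code above, keeping the same loading step and local embedding model from the question:

from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.embeddings import resolve_embed_model

# same loading step as the question
documents = SimpleDirectoryReader("data", required_exts=[".pdf"]).load_data()
embed_model = resolve_embed_model("local:BAAI/bge-small-en-v1.5")

# Option 1: set the global default so from_documents() picks it up
Settings.embed_model = embed_model
index = VectorStoreIndex.from_documents(documents)

# Option 2: attach it directly when building the index
index = VectorStoreIndex.from_documents(documents, embed_model=embed_model)

With either option the documents are embedded locally, so no OpenAI key is required.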