Updated 8 months ago

I am trying to use Groq with llama_index. Below is my code, but while executing it I get: openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Invalid API Key'

My concern is not the invalid OpenAI API key itself. It's why the code is referring to an OpenAI key at all when I am not using OpenAI anywhere. My idea was to build the application on an open-source model.
Does LlamaIndex need to refer to OpenAI for its internal workings even when we specify open-source models? Please suggest @Logan M

# pip install llama-index-llms-groq
from llama_index.llms.groq import Groq
# pip install python-dotenv
from dotenv import load_dotenv
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core import PromptTemplate, Settings
from llama_index.core.embeddings import resolve_embed_model

# load GROQ_API_KEY (and any other secrets) from .env
load_dotenv()

def groq_ingest_load(query):
    # only load PDF files
    required_exts = [".pdf"]

    # load documents
    loader = SimpleDirectoryReader(
        "data",
        required_exts=required_exts,
    )

    documents = loader.load_data()

    # create embeddings using a local HuggingFace model
    embed_model = resolve_embed_model("local:BAAI/bge-small-en-v1.5")

    prompt = PromptTemplate(template=template)  # `template` is defined elsewhere

    # define the LLM
    llm = Groq(model="llama3-8b-8192", request_timeout=3000)

    # set up the LLM and output token limit
    Settings.llm = llm
    Settings.num_output = 250

    # define the index
    index = VectorStoreIndex.from_documents(documents)

    # define the query engine
    query_engine = index.as_query_engine()

    # run the query and print the response
    response = query_engine.query(query)

    print(response)
You didn't use your embedding model.

Either set it in Settings:

Settings.embed_model = ...

Or attach it directly to the index:

VectorStoreIndex(..., embed_model=embed_model)
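Putting the fix together, here is a minimal sketch of both options, reusing the names and model strings from the original snippet (the "data" directory and model names come from the asker's code; nothing here is verified against a live API):

```python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.embeddings import resolve_embed_model
from llama_index.llms.groq import Groq

# same local embedding model as in the original snippet
embed_model = resolve_embed_model("local:BAAI/bge-small-en-v1.5")

# Option 1: set it globally so every index picks it up
Settings.llm = Groq(model="llama3-8b-8192", request_timeout=3000)
Settings.embed_model = embed_model

documents = SimpleDirectoryReader("data", required_exts=[".pdf"]).load_data()
index = VectorStoreIndex.from_documents(documents)

# Option 2: pass it directly to the index instead of relying on Settings
# index = VectorStoreIndex.from_documents(documents, embed_model=embed_model)
```

With either option, LlamaIndex no longer falls back to its default OpenAI embedding model at indexing time, which is what triggered the 401 even though no OpenAI model was requested.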
ahh !! missed it ! Thank you for catching this ! πŸ™‚