I am currently using the Groq API with llama_index to run the Llama 3 70B model. I want to run this model on a Linux-based server. I downloaded the model weights, but I don't know how to load them in llama_index/Groq. I don't want to change any of the code besides the LLM-loading part. Is it possible to do this?
For testing the capabilities of Llama 3 70B I used the Groq API. Now that I am satisfied with its performance, I want to deploy it myself. Currently I load the model with `llm = Groq(model="llama3-70b-8192", api_key=os.getenv('GROQ_API_KEY'))`. Is there any way to load my downloaded model using llama_index?