LLMTextCompletionProgram + GroqCloud is hella slow??

I've tested the GroqCloud native API from the terminal and it is indeed super fast, but when I use it with a Pydantic program it is slow as hell. Does it have some kind of loop internally?

```python
from llama_index.llms.groq import Groq
from llama_index.core.program import LLMTextCompletionProgram

llama_llm = Groq(model="llama3-70b-8192", api_key=GROQ_API_KEY)
program = LLMTextCompletionProgram.from_defaults(
    output_cls=Data,
    prompt_template_str=prompt,
    verbose=False,
    llm=llama_llm,
)
response = program(
    text=source_reference_comparison,
    num_sections=len(sections),
    sections=str(sections),
)
```
Running the program's internal steps one at a time (format the prompt, call the LLM, parse the output) makes it possible to see where the time goes:

```python
program = LLMTextCompletionProgram.from_defaults(
    output_cls=Data,
    prompt_template_str=prompt,
    verbose=False,
    llm=llama_llm,
)
messages = program._prompt.format_messages(
    llm=program._llm,
    text=source_reference_comparison,
    num_sections=len(sections),
    sections=str(sections),
)
response = program._llm.chat(messages)
output = program._output_parser.parse(response.message.content)
```
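To confirm whether the latency is in the model call itself or in the program wrapper, each stage can be timed separately. This is a minimal stdlib sketch: `timed` is a hypothetical helper, and the lambdas are placeholders standing in for the real `format_messages` / `chat` / parse calls.

```python
import json
import time

def timed(label, fn):
    """Hypothetical helper: run fn, print how long it took, return its result."""
    start = time.perf_counter()
    result = fn()
    print(f"{label}: {time.perf_counter() - start:.3f}s")
    return result

# Placeholders for the real stages -- swap the lambdas for
# program._prompt.format_messages, program._llm.chat, and the output parser.
messages = timed("format", lambda: [{"role": "user", "content": "..."}])
raw = timed("llm.chat", lambda: '{"sections": 3}')  # real network call goes here
data = timed("parse", lambda: json.loads(raw))
```

If "llm.chat" dominates while "format" and "parse" are near zero, the slowdown is in the model call (or the wrapper's handling of it) rather than the parsing step.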