Hi all! I want to generate question and context pairs using `from llama_index.core.evaluation import generate_question_context_pairs`. `generate_question_context_pairs` needs an LLM to generate the pairs. I am using llama13b deployed on my private premises, which returns its output through an API call, and I want to use it as the LLM for generating those pairs. From the docs I can see that it takes OpenAI models, so how should I use llama13b instead? The following is the API call that invokes llama13b:
```python
import json
import requests

# This runs inside a class method; self.api_url, self.headers, and the
# request `data` dict are set elsewhere in my class.
try:
    response = requests.post(url=self.api_url, data=json.dumps(data), headers=self.headers)
    response.raise_for_status()  # Raise an HTTPError for bad responses (4xx and 5xx)
    if response.json() is None:
        raise ValueError("Returned None String")
    # Keep only the text after the last "Question:" marker, dropping the leading space
    result = response.json()['outputs'][0]['data'][0].split("Question:")[-1][1:]
    return result
except requests.exceptions.RequestException as e:
    print(f"Error making API request: {e}")
    return None
```
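From the docs I found the `CustomLLM` base class, so I'm guessing I need to wrap my API in something like the sketch below. To be clear, this is just my best guess: the `Llama13BAPI` name and the request payload shape are placeholders, since I'm not sure of the right structure for my endpoint.

```python
import json
from typing import Any

import requests
from llama_index.core.llms import (
    CompletionResponse,
    CompletionResponseGen,
    CustomLLM,
    LLMMetadata,
)
from llama_index.core.llms.callbacks import llm_completion_callback


class Llama13BAPI(CustomLLM):
    """Wraps my on-prem llama13b HTTP endpoint as a LlamaIndex LLM."""

    api_url: str
    headers: dict = {}

    @property
    def metadata(self) -> LLMMetadata:
        # Rough numbers for llama13b; I'd adjust these to the deployment's real limits.
        return LLMMetadata(
            context_window=4096,
            num_output=256,
            model_name="llama13b",
        )

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        # Placeholder payload -- replace with whatever my endpoint actually expects.
        data = {"inputs": [{"data": [prompt]}]}
        response = requests.post(url=self.api_url, data=json.dumps(data), headers=self.headers)
        response.raise_for_status()
        text = response.json()["outputs"][0]["data"][0]
        return CompletionResponse(text=text)

    @llm_completion_callback()
    def stream_complete(self, prompt: str, **kwargs: Any) -> CompletionResponseGen:
        # My endpoint doesn't stream, so just yield the full completion once.
        yield self.complete(prompt, **kwargs)
```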
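Then, if I'm reading the evaluation docs right, I'd pass an instance as the `llm` argument (the URL and headers here are placeholders, and `nodes` would be my parsed nodes):

```python
llm = Llama13BAPI(
    api_url="https://my-endpoint.example/generate",  # placeholder URL
    headers={"Content-Type": "application/json"},
)
qa_dataset = generate_question_context_pairs(nodes, llm=llm, num_questions_per_chunk=2)
```

Would something like this work, or is there a better way to plug in a self-hosted model?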