I thought the OpenAIFinetuningHandler provides this functionality, but I also implemented it using just the raw OpenAI client. The docs only offer a non-blocking approach to working with the handler, which is why I was asking. If anyone is interested, here is a little snippet that blocks until the job succeeds, so you can resolve the fine-tuned model for your actual inference code:
import os
import time

from llama_index.finetuning import OpenAIFinetuneEngine

model = os.getenv("MODEL")
finetune_engine = OpenAIFinetuneEngine(
    model,
    "finetuning_events.jsonl",
)
finetune_engine.finetune()

# Poll the fine-tuning job until it succeeds
current_job = finetune_engine.get_current_job()
while current_job.status != "succeeded":
    time.sleep(60)
    current_job = finetune_engine.get_current_job()
    print(f"Waiting for job to finish, current status is {current_job.status}")

print("Your job is finished. Here is your job:")
print(current_job)