Using finetuning engine in a non-blocking way

How can I use the finetuning engine in a non-blocking way? I get an error when I try to finetune and resolve the engine. Is there a way to just read existing finetuning engines based on your OpenAI key?
I think if you use the raw OpenAI client for finetuning, this should be straightforward.
I thought the OpenAIFinetuningHandler provides this functionality, but I also implemented it using just the raw OpenAI client 🙂 The docs only offer a blocking approach to working with the handler, that's why I was asking. If anyone is interested, here's a little snippet to resolve the model for changing my actual inference code:

Plain Text
import os
import time

from llama_index.finetuning import OpenAIFinetuneEngine

# Base model to finetune, e.g. "gpt-3.5-turbo"
model = os.getenv("MODEL")
finetune_engine = OpenAIFinetuneEngine(
    model,
    "finetuning_events.jsonl",
)

# Kick off the finetuning job
finetune_engine.finetune()

# Poll the job status until it completes
current_job = finetune_engine.get_current_job()
while current_job.status != "succeeded":
    time.sleep(60)
    current_job = finetune_engine.get_current_job()
    print(f"Waiting for job to finish, current status is {current_job.status}")
print("Your job is finished. Here is your job:")
print(current_job)