I think we need some UX on the embedding model that allows for:

- submitting the job, e.g. `job_name = embed_model.submit_text_embeddings(texts)`
- checking whether the results are ready and, if so, returning the embeddings. Batch jobs are generally huge, so I wonder whether this needs to be an iterator (I'm not sure how OpenAI returns the embeddings in its response; probably a paged response?). Maybe something like `for embedding_batch in embed_model.get_embedding_job_results(job_name): ...` (see the sketch below).
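To make the shape of this concrete, here's a minimal sketch of what that interface could look like. Everything here is an assumption for illustration: `BatchEmbeddingModel`, `submit_text_embeddings`, `is_job_ready`, `get_embedding_job_results`, and `_fetch_results_page` are hypothetical names, not existing API, and the backend hooks are left abstract rather than tied to any specific provider.

```python
import abc
from typing import Iterator, List, Optional, Sequence, Tuple


class BatchEmbeddingModel(abc.ABC):
    """Hypothetical interface sketch; all method names are assumptions."""

    @abc.abstractmethod
    def submit_text_embeddings(self, texts: Sequence[str]) -> str:
        """Kick off an async batch embedding job and return a job name/id."""

    @abc.abstractmethod
    def is_job_ready(self, job_name: str) -> bool:
        """Return True once the backend reports the job as completed."""

    @abc.abstractmethod
    def _fetch_results_page(
        self, job_name: str, page_token: Optional[str]
    ) -> Tuple[List[List[float]], Optional[str]]:
        """Fetch one page of embeddings plus the token for the next page
        (None when there are no more pages). Backend-specific."""

    def get_embedding_job_results(self, job_name: str) -> Iterator[List[List[float]]]:
        """Yield embeddings one page/batch at a time, so callers never need
        to hold the full result set in memory."""
        if not self.is_job_ready(job_name):
            raise RuntimeError(f"Job {job_name!r} is not finished yet")
        page_token: Optional[str] = None
        while True:
            batch, page_token = self._fetch_results_page(job_name, page_token)
            yield batch
            if page_token is None:
                break
```

Usage would then look roughly like the snippets above:

```python
job_name = embed_model.submit_text_embeddings(texts)
# ... later, once embed_model.is_job_ready(job_name) is True:
for embedding_batch in embed_model.get_embedding_job_results(job_name):
    index.add(embedding_batch)  # hypothetical consumer of each batch
```

The generator keeps the paging question open: whether the pages come from a paged API response or from chunking a downloaded results file is an implementation detail hidden behind `_fetch_results_page`.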