
Updated 3 months ago

Batch

Is there a way, using LlamaIndex, to do embeddings via batch requests (the OpenAI ones that have a 24-hour turnaround)?
5 comments
Not at the moment. Been meaning to add something for that but tbh it's been low priority
Open to contributions though
If you have general guidelines (e.g. what you'd like to see for checking whether the request has been processed), I'd be happy to do so.
I think some UX on the embedding model that allows for:
  • submitting the job (i.e. `job_name = embed_model.submit_text_embeddings(texts)`)
  • checking if the results are ready and, if so, returning the embeddings. Batch jobs are generally huge, so I wonder if this needs to be an iterator? I'm not sure how OpenAI gives you the embeddings in the response, probably a paged one? Maybe something like `for embedding_batch in embed_model.get_embedding_job_results(job_name):`
Just quick thoughts without looking too deep lol
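For what it's worth, the interface sketched in those bullets could look something like the following. This is purely a sketch of the proposed UX, not anything that exists in llama_index: the names `submit_text_embeddings`, `is_job_ready`, and `get_embedding_job_results` are the hypothetical ones from the thread, and an in-memory dict stands in for the OpenAI Batch API backend (a real implementation would upload a JSONL file of `/v1/embeddings` requests and poll the batch job until it completes).

```python
import uuid
from typing import Iterator

class BatchEmbedModelSketch:
    """Hypothetical sketch of the batch-embedding UX proposed above.

    An in-memory dict simulates the remote batch service; a real
    implementation would talk to OpenAI's Batch API instead.
    """

    def __init__(self, page_size: int = 2):
        self.page_size = page_size
        self._jobs: dict[str, list[str]] = {}

    def submit_text_embeddings(self, texts: list[str]) -> str:
        """Submit texts as a batch job and return an opaque job name."""
        job_name = f"batch-{uuid.uuid4().hex[:8]}"
        self._jobs[job_name] = list(texts)
        return job_name

    def is_job_ready(self, job_name: str) -> bool:
        """Poll whether the batch job has completed (always True here;
        a real backend would report pending/failed states too)."""
        return job_name in self._jobs

    def get_embedding_job_results(self, job_name: str) -> Iterator[list[list[float]]]:
        """Yield embeddings page by page, mirroring a paged batch response."""
        texts = self._jobs[job_name]
        for i in range(0, len(texts), self.page_size):
            page = texts[i : i + self.page_size]
            # Stand-in "embeddings": [length, char-code sum] per text,
            # just so each text maps to some vector.
            yield [[float(len(t)), float(sum(map(ord, t)))] for t in page]

embed_model = BatchEmbedModelSketch()
job_name = embed_model.submit_text_embeddings(["hello", "world", "batch"])
if embed_model.is_job_ready(job_name):
    for embedding_batch in embed_model.get_embedding_job_results(job_name):
        print(len(embedding_batch))
```

The iterator shape keeps callers from holding an entire batch job's embeddings in memory at once, which matches the "batch jobs are generally huge" concern above.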