Hey everyone. I'm looking to use the OpenAI Batch API (half price) to get embeddings in bulk. Is it feasible to pass embeddings directly to LlamaIndex, and would anyone be able to point me in a good direction for this?
LlamaIndex does not integrate with batch mode (it didn't really make sense to, since LlamaIndex is built mostly around real-time applications).
If I'm persisting an index with many documents, wouldn't that be a decent use case for batching in the index-building step (`from_documents`)?

Maybe I'm misunderstanding the ingestion step
The Batch API can take up to 24 hours to run 😅 So you'd need to provide some awkward API to schedule the embeddings, and then use those embeddings for ingestion.

You can use the OpenAI client itself to do this, though, and just attach the embeddings to your nodes and insert them once the embeddings become available.
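A minimal sketch of the first half of that workflow: building a JSONL file of `/v1/embeddings` requests for the Batch API with the `openai` Python client. The model name, chunk texts, and `custom_id` scheme here are placeholder assumptions, not anything prescribed by LlamaIndex.

```python
import json

def build_embedding_request(custom_id, text, model="text-embedding-3-small"):
    """One JSONL line for the Batch API, targeting the embeddings endpoint."""
    return {
        "custom_id": custom_id,      # used later to match results back to nodes
        "method": "POST",
        "url": "/v1/embeddings",
        "body": {"model": model, "input": text},
    }

chunks = ["first chunk", "second chunk"]  # placeholder node texts
requests = [build_embedding_request(f"node-{i}", t) for i, t in enumerate(chunks)]
jsonl = "\n".join(json.dumps(r) for r in requests)

# Submitting the batch (requires an API key, so commented out here):
# from openai import OpenAI
# client = OpenAI()
# f = client.files.create(file=jsonl.encode(), purpose="batch")
# batch = client.batches.create(
#     input_file_id=f.id,
#     endpoint="/v1/embeddings",
#     completion_window="24h",
# )
```

You'd then poll the batch until it completes and download the output file, which is also JSONL keyed by `custom_id`.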
Thanks. Yeah, the second piece of what you mentioned was what I was interested in doing: creating the batch through the OpenAI client. But I wasn't sure how to attach and insert the embeddings (as opposed to the text).
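A sketch of that second half: parsing the batch results file and attaching the vectors to LlamaIndex nodes. The result-line structure below is abbreviated from the Batch API's output format, and the `texts_by_id` mapping is a placeholder; the LlamaIndex part assumes `TextNode` accepts a precomputed `embedding`, so it's guarded in case the library isn't installed.

```python
import json

# Abbreviated examples of lines from a Batch API results file.
batch_output_lines = [
    json.dumps({"custom_id": "node-0",
                "response": {"body": {"data": [{"embedding": [0.1, 0.2]}]}}}),
    json.dumps({"custom_id": "node-1",
                "response": {"body": {"data": [{"embedding": [0.3, 0.4]}]}}}),
]

# Placeholder: your own mapping from custom_id back to the chunk text.
texts_by_id = {"node-0": "first chunk", "node-1": "second chunk"}

def parse_batch_embeddings(lines):
    """Map custom_id -> embedding vector from a batch results JSONL."""
    out = {}
    for line in lines:
        rec = json.loads(line)
        out[rec["custom_id"]] = rec["response"]["body"]["data"][0]["embedding"]
    return out

embeddings = parse_batch_embeddings(batch_output_lines)

try:
    # If llama-index-core is installed, set .embedding on each node directly;
    # nodes that already carry an embedding shouldn't be re-embedded on insert.
    from llama_index.core.schema import TextNode
    nodes = [TextNode(text=texts_by_id[cid], embedding=vec)
             for cid, vec in embeddings.items()]
except ImportError:
    nodes = None  # llama-index not available; the embeddings dict is still usable
```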