```python
calls = []
for prompt in prompts:
    calls.append(llm.acomplete(prompt))
results = await asyncio.gather(*calls)
```
When using the AsyncOpenAI class, do I need to use it as my LLM, or is the normal OpenAI class fine? If OpenAI is fine, then why do we need AsyncOpenAI?

```python
embeddings = []
for text in texts:
    embeddings.append(Settings.embed_model.aget_text_embedding(text))
results = await asyncio.gather(*embeddings)
```
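For context on why the async path matters: the gain is concurrency, since awaited calls can run in-flight at the same time instead of blocking one after another. A minimal sketch of the same `gather` pattern with a stand-in coroutine (no real API calls, so the timing effect is visible; `fake_complete` is a hypothetical placeholder for `llm.acomplete`):

```python
import asyncio
import time

async def fake_complete(prompt: str) -> str:
    # stand-in for llm.acomplete(); sleeps instead of calling the API
    await asyncio.sleep(0.1)
    return f"response to {prompt}"

async def main() -> list:
    prompts = ["a", "b", "c", "d"]
    calls = [fake_complete(p) for p in prompts]
    # all four "requests" overlap, so total time is ~0.1s, not ~0.4s
    return await asyncio.gather(*calls)

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(results[0])  # → response to a
```

Run sequentially (awaiting each call before starting the next), the same four calls would take roughly four times as long.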
```python
import asyncio
from llama_index.embeddings.openai import OpenAIEmbedding

async def embed():
    texts = ['one', 'two', 'three']
    embed_model = OpenAIEmbedding()

    # option 1: gather per-text async embedding calls
    jobs = []
    for text in texts:
        jobs.append(embed_model.aget_text_embedding(text))
    embeddings = await asyncio.gather(*jobs)

    # option 2: use the batch API (max batch size is 2048 with openai)
    embed_model = OpenAIEmbedding(embed_batch_size=2000)
    embeddings = await embed_model.aget_text_embedding_batch(texts)
    return embeddings

asyncio.run(embed())
```
```python
embeddings = embed_model.get_text_embedding_batch(texts)
```

I also see aget_text_embedding_batch(texts) (with an a prefix). How do I use that one? And is it faster than a simple get_text_embedding_batch?
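The key difference is that the `a`-prefixed method is a coroutine: it must be awaited inside an async function (or driven with `asyncio.run`), and on its own it is not faster; it pays off when the wait can overlap with other async work. A toy sketch of the calling convention, using a hypothetical stub class that mirrors the sync/async method pair on LlamaIndex embed models (no API key needed):

```python
import asyncio

class StubEmbedModel:
    """Toy stand-in mirroring the sync/async pair on embed models."""

    def get_text_embedding_batch(self, texts):
        # blocking version: returns when the whole batch is done
        return [[float(len(t))] for t in texts]

    async def aget_text_embedding_batch(self, texts):
        # coroutine version: must be awaited; yields control while waiting
        await asyncio.sleep(0)
        return [[float(len(t))] for t in texts]

model = StubEmbedModel()
texts = ["one", "two", "three"]

sync_result = model.get_text_embedding_batch(texts)                  # plain call
async_result = asyncio.run(model.aget_text_embedding_batch(texts))   # needs an event loop
print(sync_result == async_result)  # → True
```

Both return the same embeddings; choose the async variant when you are already inside an event loop and want the batch call to run concurrently with other coroutines.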