----------------------
Yes, LlamaIndex does support batch requests. You can use the `get_text_embedding_batch` method to embed multiple texts in a single call. Here is an example of how you can do it:
from llama_index.embeddings.openai import OpenAIEmbedding

# Any LlamaIndex embedding model exposes get_text_embedding_batch;
# OpenAIEmbedding is used here as an example (requires an API key).
model = OpenAIEmbedding()
texts = ["hello world", "goodbye world"]
response_batch = model.get_text_embedding_batch(texts)
print(response_batch)
This code embeds the texts "hello world" and "goodbye world" in a single batch request. The return value is a list of embedding vectors, one per input text, in the same order as the inputs.
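To illustrate the input/output shape of the batch call without needing a real model or API credentials, here is a minimal sketch using a hypothetical `FakeEmbedder` stand-in (not part of LlamaIndex) that mimics the same contract: a list of strings in, a list of float vectors out.

```python
# FakeEmbedder is a toy stand-in, NOT a real LlamaIndex class.
# It mimics the shape of get_text_embedding_batch: one embedding
# vector per input text, returned in the same order.
class FakeEmbedder:
    def get_text_embedding(self, text: str) -> list[float]:
        # Toy 2-dimensional "embedding": text length and vowel count.
        return [float(len(text)), float(sum(c in "aeiou" for c in text))]

    def get_text_embedding_batch(self, texts: list[str]) -> list[list[float]]:
        # One embedding per input, order preserved.
        return [self.get_text_embedding(t) for t in texts]

model = FakeEmbedder()
texts = ["hello world", "goodbye world"]
embeddings = model.get_text_embedding_batch(texts)
print(len(embeddings))  # 2: one vector per input text
```

With a real embedding model the vectors would of course be high-dimensional, but the list-in/list-of-vectors-out shape is the same.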
For more details, you can refer to the LlamaIndex documentation.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------