stevehostettler
Hello, I am hitting a rate limit when I generate questions using:
from llama_index.llama_dataset.generator import RagDatasetGenerator

dataset_generator = RagDatasetGenerator.from_documents(
    service_context=service_context,
    documents=documents,
    num_questions_per_chunk=2,  # number of questions to generate per node
    show_progress=True,
)
print(dataset_generator.generate_dataset_from_nodes())

I am NOT facing that problem when creating the indexes (I changed the batch size there). Looking at the logs, the question-generation requests seem to be sent in parallel rather than waiting between calls.

Is there a way to slow the requests down?
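One possible workaround, assuming the installed llama_index version does not expose a concurrency setting on RagDatasetGenerator (newer releases may accept a workers argument, but that is version-dependent, so check the installed package): split the documents into small batches and pause between batches, so that only a batch's worth of requests goes out in parallel. This sketch reuses only the calls from the snippet above; BATCH_SIZE and PAUSE_SECONDS are illustrative values, and documents / service_context are assumed to be the same objects as before.

import time

BATCH_SIZE = 5        # illustrative: documents processed per batch
PAUSE_SECONDS = 30    # illustrative: pause between batches to stay under the rate limit

datasets = []
for start in range(0, len(documents), BATCH_SIZE):
    batch = documents[start:start + BATCH_SIZE]
    generator = RagDatasetGenerator.from_documents(
        service_context=service_context,
        documents=batch,
        num_questions_per_chunk=2,
        show_progress=True,
    )
    # Only this batch's nodes are queried in parallel, so the effective
    # request rate is bounded by the batch size.
    datasets.append(generator.generate_dataset_from_nodes())
    time.sleep(PAUSE_SECONDS)

This does not slow down the calls within a batch, but it caps how many requests can be in flight at once, which is usually enough to stay under a per-minute rate limit.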
4 comments