I'm trying to use the DatasetGenerator.from_documents() function, but because of my local resource limitations (I'm not using OpenAI), I don't have enough tokens to generate the full list of questions that gets returned in the docs. Is there any way to force further generation so it keeps producing questions from the document_summary_index?
Sadly, you can't force it; the model's token limit can't be changed.
But you can decrease the chunk_size of your nodes before running the generation (for example, by configuring a smaller chunk_size on your node parser), so the text from each node takes up less space and leaves more room for questions.
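To see why smaller chunks help, here's a rough back-of-the-envelope sketch of the trade-off (plain Python, not LlamaIndex code): with a fixed context window, every token the chunk occupies is a token the model can't spend on generated questions. All the constants here (CONTEXT_WINDOW, PROMPT_OVERHEAD, TOKENS_PER_QUESTION) are hypothetical numbers chosen for illustration.

```python
# Hypothetical budget for a single generation call, in tokens.
CONTEXT_WINDOW = 512        # assumed model context limit
PROMPT_OVERHEAD = 50        # assumed size of the question-generation prompt
TOKENS_PER_QUESTION = 20    # assumed average length of one generated question

def questions_that_fit(chunk_size: int) -> int:
    """Estimate how many questions fit after the chunk text and prompt."""
    remaining = CONTEXT_WINDOW - PROMPT_OVERHEAD - chunk_size
    return max(remaining // TOKENS_PER_QUESTION, 0)

# Shrinking the chunk leaves noticeably more room for questions.
print(questions_that_fit(400))  # large chunk -> 3 questions
print(questions_that_fit(200))  # smaller chunk -> 13 questions
```

The exact numbers depend on your model and prompts, but the direction is the point: halving the chunk size can more than quadruple the space left for generated questions under a tight local token budget.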