The list index sends every node in your index to OpenAI and builds a bottom-up tree of answers.
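To make the shape of that concrete, here's a hypothetical sketch of bottom-up tree summarization (not the actual LlamaIndex internals) — `summarize` is a stand-in for the LLM call:

```python
def summarize(texts):
    # Stand-in for an LLM call that condenses several texts into one answer.
    return " | ".join(texts)

def tree_summarize(chunks, fan_out=2):
    # Start from the leaf answers, merge `fan_out` siblings into one parent
    # per group, and repeat until a single root answer remains.
    level = list(chunks)
    while len(level) > 1:
        level = [
            summarize(level[i:i + fan_out])
            for i in range(0, len(level), fan_out)
        ]
    return level[0]
```

Every chunk gets touched at the leaf level, which is why a big index means a lot of LLM calls.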
So it's already split into chunks. Maybe your index is too big? Do you actually need to send every node to the LLM?
You can also try enabling async so the per-node LLM calls run concurrently:

```python
# apply nest_asyncio if running inside a notebook or server event loop
import nest_asyncio
nest_asyncio.apply()

query_engine = index.as_query_engine(
    response_mode="tree_summarize", use_async=True
)
```