Hello there, I have some questions about LlamaIndex's Workflows.
Suppose I want to use LlamaParse for the first event and indexing as the second event. Are there any pros to using LlamaParse asynchronously, given that indexing has to happen after parsing?
If this were running in, say, a FastAPI server or similar, you should 100% be using async — it makes your server more efficient at serving requests and keeps long-running I/O from blocking the event loop.
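To make that concrete: within a single request, indexing still has to await parsing, so async doesn't shorten one request — the win is that the server can overlap many requests. A minimal stdlib sketch, where `parse_document` and `build_index` are hypothetical stand-ins for the two workflow steps and `asyncio.sleep` models the network-bound I/O (LlamaParse is a remote API call, as is embedding):

```python
import asyncio
import time

# Hypothetical stand-ins for the two workflow steps; the sleeps model
# network-bound I/O.
async def parse_document(name: str) -> str:
    await asyncio.sleep(0.2)                 # simulated LlamaParse call
    return f"parsed:{name}"

async def build_index(parsed: str) -> str:
    await asyncio.sleep(0.1)                 # simulated embedding/indexing I/O
    return f"indexed:{parsed}"

async def handle_request(name: str) -> str:
    # Within one request the steps stay sequential:
    # indexing awaits parsing, exactly as in the workflow.
    parsed = await parse_document(name)
    return await build_index(parsed)

async def main() -> tuple[list[str], float]:
    start = time.perf_counter()
    # Across requests, every await yields the event loop, so ten
    # concurrent requests overlap instead of running back to back.
    results = await asyncio.gather(*(handle_request(f"doc{i}") for i in range(10)))
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
print(results[0])          # indexed:parsed:doc0
print(f"{elapsed:.2f}s")   # roughly one request's latency, not 10x
```

Ten requests finish in roughly the wall time of one, because the event loop interleaves them while each is waiting on I/O.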
The workflow is expected to be written as a regular Python script, which would then be hosted on an Azure VM and run on a time trigger. So I guess there's no need for async?
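One case where async can still pay off even in a time-triggered script: if each run has several documents to parse, the parse calls can be fanned out concurrently instead of one after another. A hedged sketch, where `parse_file` is a hypothetical stand-in for an async parse call (e.g. LlamaParse's `aload_data`) and the file names are made up:

```python
import asyncio

# Hypothetical stand-in for an async LlamaParse call; the sleep
# models the remote parsing request's latency.
async def parse_file(path: str) -> str:
    await asyncio.sleep(0.1)
    return f"parsed:{path}"

async def parse_batch(paths: list[str]) -> list[str]:
    # Fan out all parse calls at once; total wall time is roughly
    # one call's latency instead of the sum of all of them.
    return await asyncio.gather(*(parse_file(p) for p in paths))

docs = asyncio.run(parse_batch([f"report_{i}.pdf" for i in range(5)]))
print(docs)
```

If the script only ever parses one file and then indexes it, the steps are strictly sequential and plain synchronous calls are fine.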