
Leveraging LlamaIndex's Workflows: Parsing and Indexing Considerations

Hello there, I have some questions about LlamaIndex's Workflows.

Suppose I want to use LlamaParse for the first event and indexing as the second event. Are there any pros to using LlamaParse asynchronously, knowing that indexing has to happen after parsing?
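For context, a two-step workflow along those lines might look roughly like this. It is a minimal sketch, not anything from the thread: the event name (ParsedEvent), the file path handling, and the choice of an in-memory VectorStoreIndex are all assumptions.

```python
# Minimal sketch of a two-step Workflow: parse with LlamaParse, then index.
# ParsedEvent, the file_path field, and VectorStoreIndex are illustrative
# assumptions, not from the original question.
from llama_index.core import VectorStoreIndex
from llama_index.core.workflow import (
    Event,
    StartEvent,
    StopEvent,
    Workflow,
    step,
)
from llama_parse import LlamaParse


class ParsedEvent(Event):
    """Carries parsed documents from the parse step to the index step."""
    documents: list


class ParseAndIndexWorkflow(Workflow):
    @step
    async def parse(self, ev: StartEvent) -> ParsedEvent:
        # Workflow steps are already async, so awaiting aload_data() keeps
        # the event loop free while LlamaParse does its work.
        parser = LlamaParse(result_type="markdown")
        documents = await parser.aload_data(ev.file_path)
        return ParsedEvent(documents=documents)

    @step
    async def index(self, ev: ParsedEvent) -> StopEvent:
        # This step only fires once ParsedEvent arrives, so indexing always
        # happens after parsing, whether the parse call is sync or async.
        index = VectorStoreIndex.from_documents(ev.documents)
        return StopEvent(result=index)
```

You would kick it off with something like `await ParseAndIndexWorkflow(timeout=120).run(file_path='report.pdf')` (the timeout and path are placeholders).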
Depends on where you end up running this code.

If this was running in, say, a FastAPI server or similar, you should 10000% be using async, to make your server more efficient at serving requests instead of blocking the main thread.
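To illustrate that scenario, here is a hedged sketch of calling the workflow sketched above from a FastAPI endpoint; the route and parameter names are made up. Because the parse step awaits LlamaParse rather than blocking, the server can keep handling other requests while a document is being parsed.

```python
# Assumes the ParseAndIndexWorkflow sketch above; the endpoint path and
# parameter are illustrative assumptions.
from fastapi import FastAPI

app = FastAPI()


@app.post("/ingest")
async def ingest(file_path: str):
    # The awaited LlamaParse call inside the workflow doesn't block the
    # event loop, so other requests keep being served during parsing.
    await ParseAndIndexWorkflow(timeout=300).run(file_path=file_path)
    return {"status": "indexed"}
```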
The workflow is expected to be written in a regular Python script, which would then be hosted on an Azure VM and run on a time trigger. So I guess there's no need for async?
In that case, probably not.
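For the time-triggered script case, the run would look roughly like the sketch below (the file path and persistence step are assumptions): the workflow is still driven through asyncio, but nothing else is waiting on the event loop, so async gives no extra throughput here.

```python
# Single scheduled job: asyncio.run() just drives the workflow to completion,
# so there is no concurrency to gain from async. Path and persist dir are
# assumptions.
import asyncio


async def main():
    index = await ParseAndIndexWorkflow(timeout=300).run(
        file_path="daily_report.pdf"
    )
    index.storage_context.persist(persist_dir="./storage")


if __name__ == "__main__":
    asyncio.run(main())
```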