I am trying this example
https://docs.llamaindex.ai/en/stable/getting_started/starter_example_local/ with Ollama. When querying a simple text file, it takes more than 4 minutes to respond and complete the script.
I am using this text file:
https://sherlock-holm.es/stories/plain-text/advs.txt

It looks like this step is taking most of the time:
index = VectorStoreIndex.from_documents(
documents,
)
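For reference, here is a small stdlib-only timer one could wrap around each stage to confirm which one dominates (a minimal sketch; the commented-out calls are the steps from the starter example, not something I have changed):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    # Print how long the enclosed block took, in seconds.
    start = time.perf_counter()
    yield
    print(f"{label}: {time.perf_counter() - start:.1f}s")

# Usage against the starter-example steps (hypothetical placement):
# with timed("load"):
#     documents = SimpleDirectoryReader("data").load_data()
# with timed("index"):
#     index = VectorStoreIndex.from_documents(documents)
```

In my runs, nearly all of the wall time is spent in the indexing step rather than in loading the file.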
I am using 32 GB RAM with a 4-core CPU in the cloud.
Is there any way I can speed up the process?
Also, I see the documentation still uses llama3; it would be great if it were updated to 3.1.