Non-GPU

I am trying this example https://docs.llamaindex.ai/en/stable/getting_started/starter_example_local/ with Ollama. Querying a simple file now takes more than 4 minutes to respond and complete the script.
I am using this text file https://sherlock-holm.es/stories/plain-text/advs.txt
It looks like this step is taking a lot of the time:
Python
from llama_index.core import VectorStoreIndex

# Embeds each document chunk and builds the vector index
index = VectorStoreIndex.from_documents(
    documents,
)

I am using 32 GB of RAM and a 4-core CPU in the cloud.
Is there any way I can speed up the process?
Also, I see the docs still use llama3; it would be great if they were updated to 3.1.
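
One thing that often helps on CPU-only machines is pointing LlamaIndex at a small, dedicated embedding model rather than embedding with a full chat model. A minimal sketch, assuming the `llama-index-embeddings-ollama` package is installed and `nomic-embed-text` has been pulled into Ollama (both the package and the model name are assumptions, not from the original post):
Python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.ollama import OllamaEmbedding

# Assumption: `ollama pull nomic-embed-text` has already been run.
# A small dedicated embedding model is much faster on CPU than a large LLM.
Settings.embed_model = OllamaEmbedding(model_name="nomic-embed-text")

documents = SimpleDirectoryReader("data").load_data()
# show_progress=True prints a progress bar so the slow step is visible
index = VectorStoreIndex.from_documents(documents, show_progress=True)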
1 comment
Yea, non-GPU machines tend to take more time in comparison to GPU ones.
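
Besides using a GPU, one common mitigation is to persist the index after the first build so subsequent runs skip the embedding step entirely; the LlamaIndex starter tutorial shows this same pattern. A sketch, assuming `documents` has been loaded as above and `./storage` is a writable path (the directory name is just an example):
Python
import os

from llama_index.core import StorageContext, VectorStoreIndex, load_index_from_storage

PERSIST_DIR = "./storage"  # any writable directory works

if os.path.exists(PERSIST_DIR):
    # Reload the already-built index instead of re-embedding everything
    storage_context = StorageContext.from_defaults(persist_dir=PERSIST_DIR)
    index = load_index_from_storage(storage_context)
else:
    # First run: build the index once (the slow step), then save it to disk
    index = VectorStoreIndex.from_documents(documents)
    index.storage_context.persist(persist_dir=PERSIST_DIR)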