
Non-GPU

At a glance

The community member is following the local starter example from the LlamaIndex documentation with Ollama, but indexing a simple text file takes more than 4 minutes to complete on a cloud machine with 32 GB RAM and a 4-core CPU. They ask whether there is a way to speed up the process, and also note that the docs still reference Llama 3, suggesting an update to 3.1 would be appreciated.

In the comments, another community member notes that non-GPU machines tend to take longer than GPU-enabled machines.

I am trying this example https://docs.llamaindex.ai/en/stable/getting_started/starter_example_local/ with Ollama, and querying a simple file now takes more than 4 minutes to respond and complete the script.
I am using this text file: https://sherlock-holm.es/stories/plain-text/advs.txt
It looks like this step is taking most of the time:
Python
from llama_index.core import VectorStoreIndex

# Chunks the documents and embeds every chunk; on a CPU-only
# machine this embedding pass is the slow part
index = VectorStoreIndex.from_documents(
    documents,
)

I am using 32 GB RAM with a 4-core CPU in the cloud.
Is there any way I can speed up the process?
Also, I see the docs still use Llama 3; it would be great if they were updated to 3.1.
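
One common speed-up, sketched below under the assumption that the embedding model is the bottleneck: swap in a small local HuggingFace embedding model and raise the embedding batch size. The model name, batch size, and "data" directory are illustrative choices, not taken from the thread.
Python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Assumption: a small CPU-friendly embedding model; embed_batch_size
# controls how many chunks are embedded per forward pass
Settings.embed_model = HuggingFaceEmbedding(
    model_name="BAAI/bge-small-en-v1.5",
    embed_batch_size=64,
)

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)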
1 comment
Yeah, non-GPU machines tend to take longer than GPU-enabled ones.
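
Another mitigation worth sketching, assuming the documents do not change between runs: persist the index to disk so the expensive embedding pass only happens once. This mirrors the persistence pattern in the LlamaIndex starter examples; the "storage" and "data" directory names are arbitrary.
Python
import os

from llama_index.core import (
    SimpleDirectoryReader,
    StorageContext,
    VectorStoreIndex,
    load_index_from_storage,
)

PERSIST_DIR = "storage"  # arbitrary directory name

if os.path.exists(PERSIST_DIR):
    # Later runs: reload the already-built index, no re-embedding
    storage_context = StorageContext.from_defaults(persist_dir=PERSIST_DIR)
    index = load_index_from_storage(storage_context)
else:
    # First run: build the index (slow on CPU) and save it to disk
    documents = SimpleDirectoryReader("data").load_data()
    index = VectorStoreIndex.from_documents(documents)
    index.storage_context.persist(persist_dir=PERSIST_DIR)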