LlamaCPP

At a glance

The community member is asking whether there is a way to tell if llama.cpp is running on a GPU when used through the LlamaIndex library. A comment suggests that when the model is first loaded, there should be a lot of output about allocating to the GPU. The comment also advises the community member to make sure the library was installed with GPU support and to set the n_gpu_layers parameter to a value other than zero.

Hello, is there any way in LlamaIndex to find out whether llama.cpp is running on the GPU?
1 comment
When it first loads the model, there should be a lot of log output about allocating layers and buffers to the GPU.

Basically, you need to make sure you installed llama-cpp-python with GPU support, and set n_gpu_layers to something other than zero.
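
As a rough illustration, here is a minimal sketch of loading a GGUF model through LlamaIndex's LlamaCPP wrapper with GPU offload enabled. It assumes llama-cpp-python was built with CUDA support (for example, installed with `CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python`; older releases used `-DLLAMA_CUBLAS=on`) and that the llama-index llama.cpp integration is installed; the model path is a placeholder.

```python
# Sketch: load a GGUF model via LlamaIndex's LlamaCPP wrapper with GPU
# offload enabled. The model path is a placeholder; point it at a local
# .gguf file. Assumes llama-cpp-python was built with CUDA support.
from llama_index.llms.llama_cpp import LlamaCPP

llm = LlamaCPP(
    model_path="/path/to/model.gguf",  # placeholder path
    temperature=0.1,
    max_new_tokens=256,
    context_window=4096,
    # model_kwargs are forwarded to llama_cpp.Llama; n_gpu_layers controls
    # how many layers are offloaded to the GPU (-1 offloads all of them,
    # 0 keeps everything on the CPU).
    model_kwargs={"n_gpu_layers": -1},
    # verbose=True keeps llama.cpp's startup output, which is where the
    # GPU allocation messages show up.
    verbose=True,
)

# While the model loads, watch the console for llama.cpp lines mentioning
# CUDA devices and layers offloaded to the GPU. If those lines are absent,
# or report 0 layers offloaded, the model is running on CPU only.
response = llm.complete("Hello, world!")
print(response.text)
```

The two checks go together: the build flags determine whether GPU offload is possible at all, and n_gpu_layers determines whether it is actually used for a given model.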