LlamaCPP

Hello, is there any way in LlamaIndex to tell whether llama.cpp is running on the GPU?
1 comment
When it first loads the model, llama.cpp prints a lot of startup logs; if the model is being offloaded to the GPU, you'll see lines about allocating layers and buffers to the GPU.

Basically you need to make sure you installed llama-cpp-python with GPU support, and set n_gpu_layers to something other than zero (see the sketch below).
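
A minimal sketch of both points, assuming the llama_index.llms.llama_cpp import path used by recent LlamaIndex versions; the model path is hypothetical, and the exact CMAKE flag for a GPU build depends on your backend and llama-cpp-python version:

```python
# Assumes llama-cpp-python was built with GPU support, e.g. for CUDA:
#   CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --no-cache-dir
from llama_index.llms.llama_cpp import LlamaCPP

llm = LlamaCPP(
    model_path="./models/model.Q4_K_M.gguf",  # hypothetical local GGUF file
    model_kwargs={"n_gpu_layers": -1},  # -1 offloads all layers; 0 keeps everything on CPU
    verbose=True,  # keep llama.cpp's load logs visible to check GPU offload
)
```

With verbose=True, a successful offload shows up in the load logs as something like `llm_load_tensors: offloaded 33/33 layers to GPU`; if every tensor is reported on the CPU, the wheel was most likely built without GPU support.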