LlamaCPP
adeelhasan
last year
Hello, is there any way in LlamaIndex to find out whether llama.cpp is running on the GPU?
Logan M
last year
When it first loads the model, there should be a ton of prints about allocating to the GPU.
Basically, you need to make sure you installed llama-cpp-python with GPU support, and set n_gpu_layers to something other than zero.
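Concretely (not from the thread), here's a minimal sketch of what that looks like, assuming llama-cpp-python was built with GPU support (for example, installed with `CMAKE_ARGS="-DGGML_CUDA=on"`) and that you're using LlamaIndex's LlamaCPP wrapper; the import path and model path are assumptions, so adjust them for your setup:

```python
# A minimal sketch: load a GGUF model through LlamaIndex's LlamaCPP
# wrapper with layers offloaded to the GPU. The model path below is a
# hypothetical placeholder.
from llama_index.llms.llama_cpp import LlamaCPP

llm = LlamaCPP(
    model_path="/path/to/model.gguf",  # hypothetical path
    # Forwarded to llama-cpp-python; -1 asks it to offload all layers
    # to the GPU, while 0 keeps everything on the CPU.
    model_kwargs={"n_gpu_layers": -1},
    # verbose=True keeps llama.cpp's load-time logs, which report how
    # many layers were offloaded to the GPU.
    verbose=True,
)
print(llm.complete("Hello"))
```

With a GPU build, the load-time output should include lines along the lines of `offloaded N/N layers to GPU`; on a CPU-only build, or with n_gpu_layers left at zero, those lines are absent, which is the quickest way to confirm where llama.cpp is running.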