zach
last year
Also, would it be possible to use any of the llama.cpp models on GPU?
Logan M
last year
llama.cpp needs to be compiled with support for your GPU -- see the full instructions in their README:
https://github.com/abetlen/llama-cpp-python
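As a sketch of that build step: at the time of the reply, the llama-cpp-python README documented enabling CUDA acceleration by reinstalling the package with a CMake build flag. The exact flag varies by backend and library version (newer releases use a different flag than the one shown), so check the README for your install:

```shell
# Reinstall llama-cpp-python with GPU (cuBLAS) support.
# NOTE: this flag matches older llama-cpp-python releases; newer versions
# use a different CMake flag -- consult the README for your version.
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
```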
Then, you can set n_gpu_layers to something other than zero to utilize the GPU. Setting it to -1 will offload all layers to your GPU (this assumes you have enough VRAM, though).
https://gpt-index.readthedocs.io/en/stable/examples/llm/llama_2_llama_cpp.html#setup-llm