thoraxe
last year
looking at https://docs.llamaindex.ai/en/stable/examples/vector_stores/SimpleIndexDemoLlama-Local.html
if I don't `import torch` or set the `torch` kwargs, does it default to using CPU, or will it automatically use GPU regardless?
9 comments
Logan M
last year
It will use GPU automatically, if it's available
In this case, it looks like torch was just being used to set the tensor type to 16-bit (although I'm not sure that's even needed, since load_in_8bit is also set to true)
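A minimal sketch of the device check being described, assuming torch is installed (it falls back to CPU when it isn't):

```python
def resolve_device() -> str:
    """Return "cuda" when a CUDA-capable GPU is visible to torch, else "cpu".

    Mirrors the default behavior described above: torch (and HuggingFace
    models loaded through it) pick up the GPU automatically when one is
    available, with no explicit torch kwargs needed.
    """
    try:
        import torch
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        # No torch at all means CPU-only execution
        return "cpu"

print(resolve_device())
```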
thoraxe
last year
ok
thoraxe
last year
@ilan_pinto was having some performance issues and it wasn't clear if his system was actually accessing the GPU or not
thoraxe
last year
we're gonna do some troubleshooting
thoraxe
last year
I think he was following some of your evaluation examples and it was taking a really long time
thoraxe
last year
it wasn't clear which part was the hangup - embedding, generation, whatever
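One cheap way to isolate the hangup is to wall-clock each stage separately; a sketch (the commented-out stage calls are hypothetical placeholders, not exact LlamaIndex API):

```python
import time

def time_stage(label, fn):
    # Run one pipeline stage (e.g. embedding or generation) and report
    # its wall-clock time, to see which stage dominates.
    start = time.perf_counter()
    result = fn()
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.2f}s")
    return result, elapsed

# Usage sketch (hypothetical callables):
# _, embed_time = time_stage("embedding", lambda: index.insert(doc))
# _, gen_time = time_stage("generation", lambda: query_engine.query(q))
```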
thoraxe
last year
will let you know
Logan M
last year
hmm, should be able to run `nvidia-smi` to check GPU usage
Evals can be slow though lol
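If you'd rather run the `nvidia-smi` check from a script, a small sketch that degrades gracefully on machines without the NVIDIA driver:

```python
import shutil
import subprocess

def gpu_status() -> str:
    # Shell out to nvidia-smi when it is on PATH; otherwise report that
    # no NVIDIA driver is visible (i.e. the workload is running on CPU).
    if shutil.which("nvidia-smi") is None:
        return "nvidia-smi not found; no NVIDIA GPU/driver visible"
    return subprocess.run(
        ["nvidia-smi"], capture_output=True, text=True
    ).stdout

print(gpu_status())
```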
Logan M
last year
Sounds good!