The community member is trying to use the GPU for embedding finetuning in the llamaindex library, but it appears to run on the CPU only. They share the code they are running, and another community member points out that the GPU should be used automatically. However, when the community member checks whether a GPU is available, it prints False. After installing torch, the issue is resolved and finetuning runs on the GPU, although the community member notes that, oddly, the GPU was already being used for embedding calculations in the same environment.
@Logan M is there any way of triggering the use of the GPU during embedding finetuning? I'm running the finetuning example from the llamaindex embedding documentation and it seems to use the CPU only. I can't seem to find a way to make it use the GPU.
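For context, the finetuning walkthrough in the llamaindex docs goes through `SentenceTransformersFinetuneEngine`, which delegates training to the sentence-transformers library; that library picks up the GPU automatically when a CUDA-enabled torch build is installed, so there is no explicit "use GPU" switch to flip. A minimal sketch of that flow, assuming the train/val datasets have already been prepared as in the docs (file names and import paths here are illustrative and may differ slightly between llama_index versions):

```python
import torch
from llama_index.finetuning import (
    EmbeddingQAFinetuneDataset,
    SentenceTransformersFinetuneEngine,
)

# If this prints False, training silently falls back to CPU -- usually a
# sign that torch is missing or was installed as a CPU-only build.
print(torch.cuda.is_available())

# Datasets assumed to have been generated and saved earlier per the docs.
train_dataset = EmbeddingQAFinetuneDataset.from_json("train_dataset.json")
val_dataset = EmbeddingQAFinetuneDataset.from_json("val_dataset.json")

finetune_engine = SentenceTransformersFinetuneEngine(
    train_dataset,
    model_id="BAAI/bge-small-en",
    model_output_path="finetuned_model",
    val_dataset=val_dataset,
)
finetune_engine.finetune()  # runs on the GPU automatically when CUDA is available
embed_model = finetune_engine.get_finetuned_model()
```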
Hey, sorry for the late reply. It was printing False, and once I installed torch it started working. The weird thing is that in the same environment I'm using the GPU for embedding calculations. Anyway, it works, so I'm happy!
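That outcome is consistent with a missing or CPU-only torch install: `torch.cuda.is_available()` returns False, so sentence-transformers trains on the CPU. A quick sanity check one could run to confirm the fix, sketched here as an illustrative snippet (model name is just an example):

```python
import torch
from sentence_transformers import SentenceTransformer

# A CPU-only torch wheel reports None for torch.version.cuda and
# is_available() stays False even on a machine with a GPU.
print(torch.__version__, torch.version.cuda)
print(torch.cuda.is_available())

# sentence-transformers places the model on the GPU by default when CUDA
# is available; the device can also be forced explicitly.
model = SentenceTransformer("BAAI/bge-small-en", device="cuda")
print(model.device)  # expect something like cuda:0
```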