@Logan M is there any way of triggering the use of the GPU during embedding finetuning?

At a glance

The community member is trying to use the GPU for embedding finetuning with the llamaindex library, but the run appears to use the CPU only. They share the code they are running, and another community member says the GPU should be used automatically and asks whether torch.cuda.is_available() prints True. It prints False; after installing torch, the issue is resolved and finetuning runs on the GPU. The community member notes this is odd because embedding calculations in the same environment were already using the GPU.

@Logan M is there any way of triggering the use of the GPU during embedding finetuning? I'm running the finetuning activity from the llamaindex embedding documentation and it seems to use the CPU only. I can't seem to find a way to make it use the GPU.
This is what I'm running:

from llama_index.finetuning import (
    generate_qa_embedding_pairs,
    EmbeddingQAFinetuneDataset,
)
from llama_index.finetuning import SentenceTransformersFinetuneEngine


train_dataset = EmbeddingQAFinetuneDataset.from_json("new_dataset.json")
val_dataset = EmbeddingQAFinetuneDataset.from_json("val_dataset.json")


finetune_engine = SentenceTransformersFinetuneEngine(
    train_dataset,
    model_id="sentence-transformers/all-mpnet-base-v2",
    model_output_path="mpnet_finetuned_v2",
    batch_size=16,
    val_dataset=val_dataset,
    show_progress_bar=True,
)

finetune_engine.finetune()
Hmm, it should be using the GPU automatically, at least from my understanding.
Plain Text
import torch
print(torch.cuda.is_available())
Does that print True?
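
For anyone who hits the same thing: if that check prints False, the installed torch package is typically a CPU-only build. A minimal diagnostic sketch, assuming only torch is installed:

Plain Text
import torch

print(torch.__version__)          # CPU-only pip wheels often carry a "+cpu" suffix
print(torch.version.cuda)         # None on a CPU-only build, e.g. "12.1" otherwise
print(torch.cuda.is_available())  # False until a CUDA-enabled build is installed
print(torch.cuda.device_count())  # number of GPUs visible to torch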
Hey, sorry for the late reply. It was printing False, and once I installed torch it started working. The weird thing is that in the same environment I'm already using the GPU for embedding calculations. Anyway, it works, so I'm happy!

thank you again! πŸ™
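
For completeness, a short sketch of how the fix can be verified, assuming a CUDA-enabled torch build and sentence-transformers are installed. sentence-transformers places the model on the GPU automatically when CUDA is available, which is why no change to the finetuning code itself is needed:

Plain Text
import torch
from sentence_transformers import SentenceTransformer

print(torch.cuda.is_available())  # should now print True

# With CUDA available, sentence-transformers loads the model onto the GPU,
# so the finetune engine trains on it without any extra arguments.
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
print(model.device)  # expect cuda:0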