
When I run training using fast.ai, only the CPU is used, even though

import torch; print(torch.cuda.is_available())

shows that CUDA is available and some memory on the GPU is occupied by my training process.
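For reference, the standard way to turn that check into a device choice (plain PyTorch, nothing fast.ai-specific):

```python
import torch

# Pick the GPU when PyTorch can see one, otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)
```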

from main import DefectsImagesDataset
from fastai.vision.all import *
import numpy as np

NUM_ELEMENTS = int(1e5)  # pass an int, not a float, as an element count
CSV_FILES = {
    'events_path':
        './data/events.csv',
    'defects_path':
        './data/defects2020_all.csv',
    }

defects_dataset = DefectsImagesDataset(CSV_FILES['defects_path'], CSV_FILES['events_path'], NUM_ELEMENTS, window_size=10000)
model = models.resnet34
BATCH_SIZE = 16
NUMBER_WORKERS = 8
dls = DataLoaders.from_dsets(defects_dataset, defects_dataset, bs=BATCH_SIZE, num_workers=NUMBER_WORKERS)

import torch; print(torch.cuda.is_available())

loss_func = nn.CrossEntropyLoss()
learn = cnn_learner(dls, model, metrics=error_rate, n_out=30, loss_func=loss_func)

learn.fit_one_cycle(1)
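A useful diagnostic here is to check where the model's weights actually live by inspecting a parameter's device. Sketch with a stand-in module; in the real code you would inspect `learn.model` instead:

```python
import torch
import torch.nn as nn

# Stand-in for learn.model; a freshly created module starts on the CPU.
model = nn.Linear(4, 2)

# In the real code: next(learn.model.parameters()).device
print(next(model.parameters()).device)  # cpu unless the model was moved to CUDA
```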

CUDA version: 11.5

fast.ai version: 2.5.3

How can I make fast.ai use the GPU?

Tom Dörr

1 Answer


I had to specify the device when creating the dataloaders. Instead of

dls = DataLoaders.from_dsets(
    defects_dataset, 
    defects_dataset, 
    bs=BATCH_SIZE, 
    num_workers=NUMBER_WORKERS)

I now have

dls = DataLoaders.from_dsets(
    defects_dataset, 
    defects_dataset, 
    bs=BATCH_SIZE, 
    num_workers=NUMBER_WORKERS, 
    device=torch.device('cuda'))
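What `device=` accomplishes, in essence, is making the dataloaders hand each batch to the GPU before the forward pass, so that the inputs end up on the same device as the model's weights. The equivalent plain-PyTorch pattern looks like this (a sketch with illustrative names, not the fast.ai internals):

```python
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(8, 2).to(device)  # model weights on the target device
batch = torch.randn(16, 8)          # batches typically start out on the CPU

# Inputs must be moved to the model's device, or the forward pass fails
# (or silently runs on the CPU, as in the question).
out = model(batch.to(device))
print(out.device.type)
```

If the dataloaders never move the batches, the model may sit on the GPU while every computation that matters happens on CPU tensors, which matches the symptom described in the question.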
Tom Dörr