
I am trying to train a deeplabv3_resnet50 model on a custom dataset, but during the forward pass I get the error ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 256, 1, 1]). The following minimal example reproduces the error:

import torch
import torchvision

model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
model.train()

batch_size = 1
nbr_of_channels = 3
img_height, img_width = (500, 500)
input = torch.rand((batch_size, nbr_of_channels, img_height, img_width))
model(input)

I do not understand this at all. What is meant by got input size torch.Size([1, 256, 1, 1]), and what should I do differently?

1 Answer


The error comes from a BatchNorm layer deep inside the model: in the ASPP head of DeepLabV3, a global-average-pooling branch reduces the feature map to 1x1 pixels, producing a tensor of shape [1, 256, 1, 1], i.e. a single value per channel. In training mode, BatchNorm estimates the batch statistics (mean and std) from the current batch, and it cannot do that from one value per channel, so with batch size 1 the forward pass fails.
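You can reproduce the failure in isolation with a bare BatchNorm layer; this is a minimal sketch (the 256-channel, 1x1 shape mirrors the tensor in the error message):

```python
import torch

# In training mode, BatchNorm needs more than one value per channel
# to estimate the batch statistics.
bn = torch.nn.BatchNorm2d(256)
bn.train()

x = torch.rand(1, 256, 1, 1)  # one value per channel -> ValueError
try:
    bn(x)
except ValueError as e:
    print(e)  # Expected more than 1 value per channel when training, ...
```

The same layer accepts this input in eval mode, because it then normalizes with its running statistics instead of batch statistics.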

For any batch_size > 1 this will work:

batch_size = 2  # Need bigger batches for training
nbr_of_channels = 3
img_height, img_width = (500, 500)
input = torch.rand((batch_size, nbr_of_channels, img_height, img_width))
model(input)  # working with batch_size > 1
Shai