
In this PyTorch ResNet code example they define downsample as a variable on line 44 (`self.downsample = downsample`) and then use it as a function on line 58 (`identity = self.downsample(x)`). How does this downsample work here, both from a CNN point of view and from a Python code point of view?

Code example: PyTorch ResNet

I searched to see whether downsample is some built-in PyTorch function, but it is not.

import torch.nn as nn


def conv3x3(in_planes, out_planes, stride=1):
    """3x3 convolution with padding (helper defined alongside BasicBlock in torchvision)."""
    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
                     padding=1, bias=False)


class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1, norm_layer=None):
        super(BasicBlock, self).__init__()
        if norm_layer is None:
            norm_layer = nn.BatchNorm2d
        if groups != 1:
            raise ValueError('BasicBlock only supports groups=1')
        # Both self.conv1 and self.downsample layers downsample the input when stride != 1
        self.conv1 = conv3x3(inplanes, planes, stride)
        self.bn1 = norm_layer(planes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(planes, planes)
        self.bn2 = norm_layer(planes)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        identity = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        if self.downsample is not None:
            identity = self.downsample(x)

        out += identity
        out = self.relu(out)

        return out
GIRISH kuniyal

4 Answers


If you look into the original ResNet paper (http://openaccess.thecvf.com/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf), they use strided convolutions to downsample the image. The main path is downsampled automatically by these strided convolutions, as is done in your code. The residual path uses either (a) an identity mapping with zero entries padded in, which adds no additional parameters, or (b) a 1x1 convolution with the same stride.

The second option could look as follows:

if downsample:
    self.downsample = conv1x1(inplanes, planes, stride)
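Option (a) adds no parameters at all: the shortcut simply subsamples the input spatially and fills the extra output channels with zeros. A minimal sketch of that idea (the helper name and the slicing/padding approach below are my own illustration, not part of torchvision):

import torch.nn.functional as F

def identity_shortcut(x, planes, stride):
    # Parameter-free "option (a)" shortcut: subsample spatially by taking
    # every `stride`-th position, then zero-pad the missing output channels.
    out = x[:, :, ::stride, ::stride]
    extra_channels = planes - out.size(1)
    # F.pad pads from the last dimension backwards:
    # (W_left, W_right, H_top, H_bottom, C_front, C_back)
    return F.pad(out, (0, 0, 0, 0, 0, extra_channels))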
Thomas Pinetz
  • Hi Thomas, thanks for your answer. I understand (b): using a 1x1 convolution to get the required dimensions for the residual connection. But I could not quite get (a), how the identity mapping with zero entries works. Can you please clarify or point out some resource that is helpful for understanding option (a)? – nagapavankumar kalepu Aug 22 '20 at 08:13

In this ResNet example, when we define the BasicBlock class, we pass downsample as a constructor parameter.

def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1, norm_layer=None):

If we pass nothing to the class, then downsample = None and, as a result, the identity is left unchanged.

When we pass downsample = "some convolution layer" as the constructor argument, it will downsample the identity via the passed convolution layer so that the addition can be performed successfully. This layer downsamples the identity through the code shown below:

if self.downsample is not None:
    identity = self.downsample(x)
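As a usage sketch (assuming the BasicBlock class from the question is in scope; the concrete channel numbers 64/128 here are just an illustration):

import torch
import torch.nn as nn

# Shortcut that halves the spatial size and raises the channel count from 64 to 128,
# so the identity matches the output of the stride-2 main path.
downsample = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=1, stride=2, bias=False),
    nn.BatchNorm2d(128),
)
block = BasicBlock(inplanes=64, planes=128, stride=2, downsample=downsample)

x = torch.randn(1, 64, 56, 56)
out = block(x)       # an nn.Module is callable, so self.downsample(x) just runs its forward()
print(out.shape)     # torch.Size([1, 128, 28, 28])

This also answers the Python side of the question: downsample is an ordinary object attribute, but because nn.Module instances are callable, writing self.downsample(x) invokes the stored layer.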
GIRISH kuniyal

In addition to what Thomas Pinetz said:

In the ResNet-50 architecture, this happens as the downsampling step:

downsample = nn.Sequential(
    conv1x1(self.inplanes, planes * block.expansion, stride),
    norm_layer(planes * block.expansion),
)

Note: a 1x1 convolution followed by batch normalization.
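For context, the shortcut is only built when it is actually needed. A self-contained sketch of that decision (the function name is my own; the condition mirrors the check in torchvision's ResNet._make_layer):

import torch.nn as nn

def make_downsample(inplanes, planes, expansion, stride, norm_layer=nn.BatchNorm2d):
    # The first block of a stage needs a shortcut only when it changes
    # the spatial size (stride != 1) or the channel count.
    if stride == 1 and inplanes == planes * expansion:
        return None
    return nn.Sequential(
        nn.Conv2d(inplanes, planes * expansion, kernel_size=1, stride=stride, bias=False),
        norm_layer(planes * expansion),
    )

For example, make_downsample(256, 128, 4, 2) returns the 1x1, stride-2 shortcut used at the start of a ResNet-50 stage, while make_downsample(256, 64, 4, 1) returns None for the later blocks of a stage.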

Eray Balkanli

I believe in this context it can be average pooling or max pooling. They both reduce the dimensionality and preserve most of the properties of the input.
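For instance (illustrative only; note that a pooling shortcut changes only the spatial size, so it can stand in for the conv shortcut only when the block keeps the channel count unchanged):

import torch
import torch.nn as nn

# Parameter-free pooling shortcut: halves H and W, leaves the channel count unchanged.
downsample = nn.AvgPool2d(kernel_size=2, stride=2)

x = torch.randn(1, 64, 56, 56)
print(downsample(x).shape)  # torch.Size([1, 64, 28, 28])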

Jindřich