
I have read many papers where convolutional neural networks are used for super-resolution, image segmentation, autoencoders, and so on. They use different kinds of upsampling, a.k.a. deconvolution (there is a discussion over here in a different question). In TensorFlow there is a function for this, and in Keras there are some layers.

I tried the Keras one:

 x = tf.keras.layers.UpSampling1D(size=2)(x)

and I used this one, taken from a super-resolution repo here:

import tensorflow as tf

class SubPixel1D(tf.keras.layers.Layer):
    """One-dimensional subpixel (periodic shuffle) upsampling by a factor of r."""
    def __init__(self, r):
        super(SubPixel1D, self).__init__()
        self.r = r

    def call(self, inputs):
        # inputs: (batch, width, r * channels)
        with tf.name_scope('subpixel'):
            X = tf.transpose(inputs, [2, 1, 0])                        # (r*c, w, b)
            X = tf.compat.v1.batch_to_space_nd(X, [self.r], [[0, 0]])  # (c, r*w, b)
            X = tf.transpose(X, [2, 1, 0])                             # (b, r*w, c)
        return X
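
A quick shape check on this layer (assuming the input layout is (batch, width, channels) with the channel count a multiple of r):

import tensorflow as tf

x = tf.reshape(tf.range(8, dtype=tf.float32), (1, 2, 4))  # (batch=1, width=2, channels=4)
y = SubPixel1D(r=2)(x)
print(x.shape, '->', y.shape)  # (1, 2, 4) -> (1, 4, 2): width doubled, channels halved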

But I noticed that neither of them has any parameters in my model summary. Don't these layers need parameters so that they can learn the upsampling?

Khan

2 Answers


In Keras, UpSampling simply repeats your input to reach the size provided (you can find the documentation here), so there is no need for these layers to have parameters.
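
A small sketch of that behaviour (the toy input is my own):

import tensorflow as tf

x = tf.constant([[[1.0], [2.0], [3.0]]])  # shape (1, 3, 1)
y = tf.keras.layers.UpSampling1D(size=2)(x)
print(y.numpy().ravel())  # [1. 1. 2. 2. 3. 3.] -- each step is simply repeated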

I think you have confused upsampling with transposed convolution / deconvolution.

  • I was not confused by transposed convolution and deconvolution. But yeah, both of those have trainable parameters and the upsampling does not. I think the difference is that if you use upsampling, there should be a following convolution layer, so the convolution layer learns the upsampling. – Khan May 02 '20 at 11:23
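
The pattern from that comment, as a minimal sketch (the filter count and kernel size are arbitrary choices of mine):

import tensorflow as tf

inp = tf.keras.Input(shape=(100, 8))
x = tf.keras.layers.UpSampling1D(size=2)(inp)        # 0 parameters: pure repetition
x = tf.keras.layers.Conv1D(8, 9, padding='same')(x)  # the trainable part that "learns the upsampling"
model = tf.keras.Model(inp, x)
model.summary()  # every trainable parameter (8*9*8 + 8 = 584) sits in the Conv1D layer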

If you look at the actual source code of UpSampling1D on GitHub, the upsampling involved is plain interpolation: nearest-neighbour repetition (UpSampling2D additionally offers bilinear). These interpolation schemes have no learnable parameters, such as weights or biases, unless they are followed by a convolution layer. Since SubPixel1D likewise uses no convolution or other learnable layers, it has no training parameters either.
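
If you do want the upsampling itself to be learned, one option is a transposed convolution, which carries its own weights (a sketch assuming TF >= 2.3, where Conv1DTranspose is available):

import tensorflow as tf

inp = tf.keras.Input(shape=(100, 8))
x = tf.keras.layers.Conv1DTranspose(8, 9, strides=2, padding='same')(inp)  # doubles the width
model = tf.keras.Model(inp, x)
model.summary()  # unlike UpSampling1D, this layer has trainable weights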