
I am implementing a convolutional autoencoder and I am having serious difficulty finding the correct shapes for the convolution_transpose layers in the decoder. So far my encoder looks like this (I trace the resulting shapes just after the list):

    ('convolution', num_outputs=256, kernel_size=48, stride=2, padding="SAME")
    ('convolution', num_outputs=256, kernel_size=7, stride=1, padding="SAME" )
    ('convolution', num_outputs=256, kernel_size=7, stride=1, padding="SAME" )
    ('convolution', num_outputs=256, kernel_size=7, stride=1, padding="SAME" )
    ('convolution', num_outputs=256, kernel_size=7, stride=1, padding="SAME" )
    ('convolution', num_outputs=256, kernel_size=7, stride=1, padding="SAME" )
    ('convolution', num_outputs=256, kernel_size=7, stride=1, padding="SAME" )
    ('convolution', num_outputs=256, kernel_size=7, stride=1, padding="SAME" )
    ('convolution', num_outputs=256, kernel_size=32, stride=1, padding="SAME" )
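
To sanity-check the shapes I trace them by hand using TensorFlow's SAME-padding rule, output = ceil(input / stride), so only the stride-2 layer changes the spatial size (this is just arithmetic, not my actual model code):

    def conv_same(n, stride):
        # SAME-padded convolution: spatial size depends only on the stride
        return (n + stride - 1) // stride  # ceil division

    h, w = 161, 1800                          # spatial dims of the input (10, 161, 1800, 1)
    h, w = conv_same(h, 2), conv_same(w, 2)   # first encoder layer, stride 2
    # the remaining eight layers use stride 1, so the size stays put
    print(h, w)                               # 81 900 -> encoder output (10, 81, 900, 256)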

Now, in the decoder I am trying to invert this, using the following layers (the transpose shape rule I assume is sketched after the list):

    ('convolution_transpose', num_outputs=256, kernel_size=32, stride=2, padding="SAME")
    ('convolution_transpose', num_outputs=256, kernel_size=7, stride=1, padding="SAME" )
    ('convolution_transpose', num_outputs=256, kernel_size=7, stride=1, padding="SAME" )
    ('convolution_transpose', num_outputs=256, kernel_size=7, stride=1, padding="SAME" )
    ('convolution_transpose', num_outputs=256, kernel_size=7, stride=1, padding="SAME" )
    ('convolution_transpose', num_outputs=256, kernel_size=7, stride=1, padding="SAME" )
    ('convolution_transpose', num_outputs=256, kernel_size=7, stride=1, padding="SAME" )
    ('convolution_transpose', num_outputs=256, kernel_size=7, stride=1, padding="SAME" )
    ('convolution_transpose', num_outputs=256, kernel_size=48, stride=2, padding="SAME" )
    ('convolution_transpose', num_outputs=1, kernel_size=48, stride=2, padding="SAME" )
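
For the transposed layers I assume the usual SAME-padding rule of the high-level TensorFlow layers, output = input * stride (the low-level tf.nn.conv2d_transpose can take an explicit output_shape, but the layer wrappers cannot):

    def conv_transpose_same(n, stride):
        # SAME-padded transposed convolution: output size = input size * stride
        return n * stride

    print(conv_transpose_same(81, 2))   # 162 -- already not the original odd height of 161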

I can't reproduce the size of the input.

Input Size:  (10, 161, 1800, 1)
Output Size: (10, 3600, 1024, 1)

Any idea on what the correct settings for the decoder layer should be?

Qubix

1 Answer


Not sure what platform you are using or what you are trying to accomplish. However, your input size should be divisible by the strides of your convolutional layers, otherwise your input will be padded (or cropped). That aside, in TensorFlow the following works:

    # stride-2 convolution halves each spatial dimension (with SAME padding)
    conv = tf.layers.conv2d(inputs, 256, 3, 2, 'SAME', activation=tf.nn.relu)
    # stride-2 transposed convolution doubles each spatial dimension back
    deconv = tf.layers.conv2d_transpose(conv, 256, 3, 2, 'SAME', activation=tf.nn.relu)

Here 256 is the number of output features, 3 is the kernel size (3x3), and 2 is the stride.
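
To see the divisibility point concretely, here is a minimal shape check (TF 1.x tf.layers; the 161x1800 input size is taken from your question, and the odd 161 is exactly what breaks the round trip):

    import tensorflow as tf

    x = tf.placeholder(tf.float32, [None, 161, 1800, 1])
    # stride-2 SAME conv: 161 -> ceil(161/2) = 81, 1800 -> 900
    h = tf.layers.conv2d(x, 256, 3, 2, 'SAME', activation=tf.nn.relu)
    # stride-2 SAME transpose: 81 -> 162, 900 -> 1800
    y = tf.layers.conv2d_transpose(h, 1, 3, 2, 'SAME', activation=tf.nn.relu)
    print(h.get_shape())   # (?, 81, 900, 256)
    print(y.get_shape())   # (?, 162, 1800, 1) -- the odd 161 is not recovered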

fezzik