
...and if so under what circumstances?

A convolutional layer usually yields an output of smaller size. Is it possible to reverse/invert such an operation by flipping/transposing the kernel used and providing appropriate padding, or the like?

I'm looking only at the convolutional layer's operation here - without pooling layers, concatenation, non-linear activation functions, etc.

I'm not looking for any of the several trainable versions of reverse convolutional operations. Those can be achieved, for example, by strides $\geq 1$ in the output space or by intrinsic padding in the input space. Vincent Dumoulin and Francesco Visin provide very elucidating animated GIFs on their GitHub page. The deep learning community is divided over the naming of these operations: transposed convolution, fractionally strided convolution and deconvolution are all used (the latter, although widely used, is very misleading, since it is not a proper mathematical deconvolution).
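For concreteness, here is a minimal sketch of that idea (my own illustration, not from the post, assuming PyTorch and arbitrary toy shapes): convolving the output with the spatially flipped kernel under full padding restores the input's spatial shape and coincides with the library's transposed convolution, but it does not recover the input's values.

```python
import torch
import torch.nn.functional as F

# Toy example: one channel, 6x6 input, 3x3 kernel (arbitrary choices).
x = torch.randn(1, 1, 6, 6)
w = torch.randn(1, 1, 3, 3)

y = F.conv2d(x, w)                          # valid convolution: 6x6 -> 4x4

# "Reverse" the shape change: flip the kernel, use full padding (k - 1 = 2).
w_flipped = torch.flip(w, dims=[2, 3])
x_back = F.conv2d(y, w_flipped, padding=2)  # 4x4 -> 6x6 again

print(x_back.shape)                                      # torch.Size([1, 1, 6, 6])
print(torch.allclose(x_back, F.conv_transpose2d(y, w)))  # True: this is exactly a transposed convolution
print(torch.allclose(x_back, x))                         # False: the values are not recovered
```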

Dave

2 Answers


I believe this is where the difference between a transposed convolution and a deconvolution becomes essential.

A deconvolution is the mathematical inverse of what a convolution does, whereas a transposed convolution reverses only the spatial transformation between input and output. That means that if you want to reverse the change in the shape of the output, a transposed convolution will do the job, but it will not be the mathematical inverse with respect to the values it produces. I wrote a few words about this topic in an article.
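A small sketch of that distinction (my own illustration, assuming PyTorch; the layer configuration and shapes are arbitrary): a stride-2 convolution halves the spatial size, a matching transposed convolution restores it, but the values are not recovered.

```python
import torch
import torch.nn as nn

# Arbitrary toy setup: a stride-2 convolution and its shape-reversing counterpart.
conv = nn.Conv2d(1, 1, kernel_size=3, stride=2, padding=1)
tconv = nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2,
                           padding=1, output_padding=1)

x = torch.randn(1, 1, 8, 8)
y = conv(x)        # 8x8 -> 4x4
x_hat = tconv(y)   # 4x4 -> 8x8: the spatial transformation is reversed

print(y.shape, x_hat.shape)      # torch.Size([1, 1, 4, 4]) torch.Size([1, 1, 8, 8])
print(torch.allclose(x, x_hat))  # False: not the mathematical inverse
```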

pietz

Well, the deconvolution is defined very clearly. In convolution, you multiply the kernel with a patch of the input frame (like a vector multiplication), sum it up, and ASSIGN the value to the output.
In deconvolution, you take a single output value, multiply it with the kernel, thereby highlighting the points that influenced that output, and ADD it to the former input layer (which of course must be filled with zeros in the beginning). This must give the same "input" layer as in the forward pass.
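Here is a minimal 1-D sketch of the operation described above (my own NumPy illustration, not from the answer): the forward pass multiplies, sums, and assigns; the reverse pass scatters each single output value back through the kernel and adds it into a zero-initialized array of the input's shape.

```python
import numpy as np

def conv1d_valid(x, k):
    # Forward pass: multiply kernel and input patch, sum, ASSIGN to output.
    n, m = len(x), len(k)
    return np.array([np.dot(x[i:i + m], k) for i in range(n - m + 1)])

def deconv1d(y, k, n):
    # Reverse pass: take one output value, multiply it with the kernel,
    # and ADD it into an array "filled with zeros in the beginning".
    m = len(k)
    x_rec = np.zeros(n)
    for i, v in enumerate(y):
        x_rec[i:i + m] += v * k
    return x_rec

x = np.array([1., 2., 3., 4., 5.])
k = np.array([1., 0., -1.])
y = conv1d_valid(x, k)          # shape (3,)
x_rec = deconv1d(y, k, len(x))  # shape (5,): same shape as the forward input

print(x_rec.shape == x.shape)   # True
print(np.allclose(x_rec, x))    # False: the values differ in general
```

As the other answer notes, this reproduces the shape of the forward pass's input, but generally not its values.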

user184868