
I've read the paper *Visualizing and Understanding Convolutional Networks* by Zeiler and Fergus and would like to make use of their visualization technique. The paper sounds promising, but unfortunately I have no idea how to implement it in Keras (version 1.2.2).

Two questions:

  1. Keras only provides a Deconvolution2D layer, but no unpooling layer and no "reverse ReLU" layer. How can I make use of the switch variables mentioned in the paper in order to implement the unpooling? And how do I apply the reverse ReLU (or is it just the "normal" ReLU)?

  2. Keras' Deconvolution2D layer has the attributes `activation` and `subsample`. Maybe those are the key to solving my problem? If so, would I have to replace each combination of Convolution2D + Activation + Pooling layers with a single Deconvolution2D layer?

I appreciate your help!

Marcin Możejko
D.Laupheimer

1 Answer


The authors of the paper you cite (as far as I remember) briefly discuss how to handle both of these, specifically:

  1. ReLU. The inverse of ReLU is... ReLU. Since in the forward pass convolution is applied to rectified activations, in the backward pass the deconvolution should likewise be applied to rectified reconstructions, i.e. you just pass the signal through a "normal" ReLU again.
  2. Pooling. Strictly speaking, max pooling cannot be inverted. To cite the paper: "we can obtain an approximate inverse by recording the locations of the maxima within each pooling region in a set of switch variables. In the deconvnet, the unpooling operation uses these switches to place the reconstructions from the layer above into appropriate locations, preserving the structure of the stimulus."
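To make the switch idea concrete, here is a minimal NumPy sketch of both operations (the function names are my own, not from the paper or Keras): max pooling that records the argmax positions as a boolean "switch" mask, and unpooling that places each pooled value back at its recorded location, leaving zeros elsewhere.

```python
import numpy as np

def relu(x):
    """The "inverse" of ReLU in the deconvnet is ReLU itself."""
    return np.maximum(x, 0.0)

def max_pool_with_switches(x, size=2):
    """Non-overlapping max pooling over a 2-D map.

    Returns the pooled map plus a boolean mask ("switches") marking
    where each maximum was found in the input.
    """
    h, w = x.shape
    pooled = np.zeros((h // size, w // size), dtype=x.dtype)
    switches = np.zeros_like(x, dtype=bool)
    for i in range(0, h, size):
        for j in range(0, w, size):
            region = x[i:i + size, j:j + size]
            r, c = np.unravel_index(np.argmax(region), region.shape)
            pooled[i // size, j // size] = region[r, c]
            switches[i + r, j + c] = True
    return pooled, switches

def unpool_with_switches(pooled, switches, size=2):
    """Approximate inverse of pooling: place each pooled value at the
    position its switch recorded; all other positions stay zero."""
    out = np.zeros(switches.shape, dtype=pooled.dtype)
    for i in range(pooled.shape[0]):
        for j in range(pooled.shape[1]):
            mask = switches[i * size:(i + 1) * size, j * size:(j + 1) * size]
            out[i * size:(i + 1) * size, j * size:(j + 1) * size] = mask * pooled[i, j]
    return out
```

In a deconvnet pass you would apply `unpool_with_switches`, then `relu`, then the transposed convolution for each layer; in a real Keras model you would capture the switches during the forward pass and feed them to the backward pass as extra inputs.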

As for an actual Keras implementation, have a look at this thread - you will find some examples there that you can use immediately.

Lukasz Tracewski
  • Thanks for your answer! Unfortunately, I still don't understand how to implement this paper. I've tried your given links and also tried to make use of Deconvolution2D by trial and error, but the results are just images that are totally red (colormap = jet). This paper would lead to some great visualizations... – D.Laupheimer Mar 10 '17 at 09:51
  • In your question you're asking how to do this - and I provided an answer. It's rather hard to comment on why all your images are red - that's an implementation issue. As explained, the inverse of ReLU is ReLU, while the provided link has a code snippet for unpooling. – Lukasz Tracewski Mar 10 '17 at 09:54
  • Yep, I got it. I used ReLU as the activation function within the Deconvolution2D layers. I also used subsample for the unpooling. I will keep it rolling and share my solution (if I succeed). – D.Laupheimer Mar 10 '17 at 10:25