I've read the paper "Visualizing and Understanding Convolutional Networks" by Zeiler and Fergus and would like to use their visualization technique. The approach looks promising, but I have no idea how to implement it in Keras (version 1.2.2).
Two questions:
1. Keras only provides a `Deconvolution2D` layer, but no unpooling layer and no "reverse ReLU" layer. How can I use the switch variables mentioned in the paper to implement the unpooling? And how do I apply the reverse ReLU (or is it just the normal `ReLU`)?
2. Keras' `Deconvolution2D` layer has the attributes `activation` and `subsample`. Could those be the key to solving my problem? If so, would I have to replace each `Convolution2D` + `Activation` + `Pooling` stack with a single `Deconvolution2D` layer?
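To show what I mean by the switches, here is my understanding of the mechanism in plain NumPy (function names are my own, and this is only a sketch for a single 2D feature map, not Keras code): during pooling you record the argmax location of each window, and during unpooling you route each pooled value back to exactly that location, leaving everything else zero.

```python
import numpy as np

def max_pool_with_switches(x, size=2):
    """Non-overlapping max pooling that also records the 'switch'
    positions (argmax locations within each pooling window)."""
    h, w = x.shape
    pooled = np.zeros((h // size, w // size))
    switches = np.zeros_like(x, dtype=bool)
    for i in range(0, h, size):
        for j in range(0, w, size):
            window = x[i:i + size, j:j + size]
            pooled[i // size, j // size] = window.max()
            # Remember where the max came from.
            di, dj = np.unravel_index(window.argmax(), window.shape)
            switches[i + di, j + dj] = True
    return pooled, switches

def unpool(pooled, switches, size=2):
    """Place each pooled value back at its recorded switch position;
    all other positions stay zero."""
    upsampled = np.repeat(np.repeat(pooled, size, axis=0), size, axis=1)
    return upsampled * switches
```

For example, pooling `[[1, 2], [4, 3]]` yields `[[4]]` with the switch at position (1, 0), and unpooling reconstructs `[[0, 0], [4, 0]]`. Is there a way to express this switch bookkeeping with Keras layers?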
I appreciate your help!