
I am trying to replicate Tim O'Shea's RadioML in Python 3.5 before I play around with it, and have been making edits to his publicly available code: https://github.com/radioML/examples/blob/master/modulation_recognition/RML2016.10a_VTCNN2_example.ipynb

In cell [5] I have changed the line `model.add(ZeroPadding2D((0, 2)))` to `model.add(keras.layers.ZeroPadding2D(padding=(0, 0)))`, as I'm using Keras 2, not Keras 1.2 as in his notebook. I now get an output shape of (None, 1, 2, 128) in model.summary(), where the example output shows the shape I should be getting is (None, 1, 2, 132). This reduces all subsequent output shapes, since it is a sequential model, and slightly reduces my final number of parameters. I've pored over the Keras 2 documentation and tried a few fixes, but I can't see how to change this fourth dimension of the output shape at all, much less to 132, without changing the output size of the Reshape layer that feeds into it, which is supposed to remain (1, 2, 128).

Please be gentle I'm new to NN! :)

2 Answers


The padding layer adds rows/columns of zeros to either side of its input tensor. You changed the padding layer's argument from (0, 2) to (0, 0), so it went from padding 2 cells on either side (4 total) to padding 0 cells on either side (0 total).

The input to that layer is of shape (None, 1, 2, 128), so when you pad it by 0 cells you don't change it at all, and still have a tensor of shape (None, 1, 2, 128+0 = 128). The code on the GitHub page you linked has a padding layer with the argument (0, 2), meaning it pads 2 cells on either side of its input tensor, resulting in a tensor of shape (None, 1, 2, 128+2+2 = 132).

If you want to keep the same dimensions (None, 1, 2, 132), you will have to either pad the tensor (by passing (0, 2) to the padding layer, as is done on GitHub) or perform some other operation before the 'conv1' layer that widens your tensor by 4 cells.
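The arithmetic above can be checked with a quick NumPy sketch. Note that np.pad here is only a stand-in for what ZeroPadding2D does to the axis it treats as width in this case, not Keras itself:

```python
import numpy as np

# A dummy tensor with the shape that feeds the padding layer:
# (batch=1, channels=1, height=2, width=128)
x = np.zeros((1, 1, 2, 128))

# Padding 2 zeros on each side of the last axis, as the (0, 2)
# argument does when that axis is treated as width:
padded = np.pad(x, ((0, 0), (0, 0), (0, 0), (2, 2)))
print(padded.shape)  # (1, 1, 2, 132)

# Padding by (0, 0) leaves the tensor untouched:
unpadded = np.pad(x, ((0, 0), (0, 0), (0, 0), (0, 0)))
print(unpadded.shape)  # (1, 1, 2, 128)
```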

  • When I add the ZeroPadding2D layer to the sequential model with model.add(keras.layers.ZeroPadding2D(padding=(0, 2))), I get an output shape in the model.summary() of (None, 1, 6, 128). What you explain makes sense, and I thank you, but it is not yielding the right shape for reasons unknown. In my desperation, in fact, no padding values I've entered can change that 128 value to anything but 128. – Kyle McClintick Jan 24 '18 at 22:56
  • By adding data_format="channels_first" as a parameter I am now getting the right layer shapes, but I'm not sure why: for channels_last (the default) or channels_first, the ordering of the padding arguments is the same, height first then width. If those are the only inputs, why does this change the ZeroPadding2D layer's output shape from (None, 1, 6, 128) to (None, 1, 2, 132)? – Kyle McClintick Jan 25 '18 at 02:00
  • @KyleMcClintick: This is because `channels_first` indicates that your input data shape is `(nExamples, channels, height, width)`, while `channels_last` indicates the shape `(nExamples, height, width, channels)`, where channels refers to RGB channels (or, in your case, a single channel). The two numbers in padding=(0, 2) tell Keras to pad 0 pixels in height and 2 pixels on both the left and right sides (in width). Because you were using the default `channels_last` while your data is actually `channels_first`, Keras interpreted your axes in the wrong order, as `(nExamples, height, width, channels)`, and padded the wrong axis. – Katherine Chen Jun 07 '19 at 02:17
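The axis bookkeeping described in that comment can be mimicked with np.pad (again a NumPy stand-in for ZeroPadding2D's behavior, not Keras itself): under channels_last the layer pads axes 1 and 2, while under channels_first it pads axes 2 and 3.

```python
import numpy as np

# The data is really (batch, channels=1, height=2, width=128),
# i.e. channels_first layout
x = np.zeros((1, 1, 2, 128))

# channels_last (the default): Keras treats axes 1 and 2 as
# height/width, so padding=(0, 2) pads axis 2, giving the
# unexpected (1, 1, 6, 128)
as_channels_last = np.pad(x, ((0, 0), (0, 0), (2, 2), (0, 0)))
print(as_channels_last.shape)  # (1, 1, 6, 128)

# channels_first: axes 2 and 3 are height/width, so padding=(0, 2)
# pads axis 3, giving the intended (1, 1, 2, 132)
as_channels_first = np.pad(x, ((0, 0), (0, 0), (0, 0), (2, 2)))
print(as_channels_first.shape)  # (1, 1, 2, 132)
```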

This problem was resolved by @Kyle McClintick, as mentioned in the comments above, by replacing this code

 model.add(ZeroPadding2D((0,2)))

with this

 model.add(ZeroPadding2D(padding=(0, 2), data_format="channels_first"))
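For reference, here is a minimal standalone sketch of the fixed layer, written against TensorFlow's bundled Keras rather than the notebook's standalone Keras 2; the (2, 128) input shape mirrors the notebook's data layout feeding the Reshape layer:

```python
import tensorflow as tf

# Reproduce just the Reshape -> ZeroPadding2D front of the model
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2, 128)),
    tf.keras.layers.Reshape((1, 2, 128)),
    # channels_first tells Keras the 1 is the channel axis, so
    # padding=(0, 2) widens the 128-wide axis to 132
    tf.keras.layers.ZeroPadding2D(padding=(0, 2), data_format="channels_first"),
])
print(model.output_shape)  # (None, 1, 2, 132)
```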