
I'm using the Conv2D layer of Keras. In the documentation it is written that

padding: one of "valid" or "same" (case-insensitive). Note that "same" is slightly inconsistent across backends with strides != 1, as described here

As input I have images of size (64,80,1) and I'm using a kernel of size 3x3. Does that mean that the padding is wrong when using Conv2D(32, 3, strides=2, padding='same')(input)?

How can I fix it using ZeroPadding2D?

machinery
  • There is no such thing as *wrong padding*. There are only paddings that suit your needs, and paddings that don't. To get an answer you will need to specify what you want to achieve. Also, just printing the model using `summary()` will give you a good idea how different paddings and strides influence the output shape of your layer. – sebrockm Jul 23 '19 at 22:05
  • @sebrockm What do you mean by what I want to achieve? I think convolutions either don't work without padding or produce a smaller output window. I just want to use padding so that the result stays the same size (to decrease the size I use strides). – machinery Jul 24 '19 at 12:52
  • *I think without padding convolutions are not working*. No, they will work with or without padding, just slightly differently (because the output shape will be slightly different depending on the padding). Which padding works best for you is just another hyperparameter of your network that you will need to experiment with. – sebrockm Jul 24 '19 at 13:09

1 Answer

Based on your comment, and seeing that you defined a stride of 2, I believe what you want to achieve is an output size that's exactly half of the input size, i.e. output_shape == (32, 40, 32) (the last 32 being the number of filters).

In that case, just call model.summary() on the final model and you will see if that is the case or not.
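For instance, a minimal check could look like this (a sketch assuming the TensorFlow backend; the input shape and layer parameters are taken from the question):

from tensorflow.keras.layers import Input, Conv2D
from tensorflow.keras.models import Model

inp = Input(shape=(64, 80, 1))
out = Conv2D(32, 3, strides=2, padding='same')(inp)
Model(inp, out).summary()
# With the TensorFlow backend, 'same' with stride 2 yields ceil(64/2) x ceil(80/2),
# so the summary reports an output shape of (None, 32, 40, 32)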

If it is, there's nothing else to do. If it's bigger than you want, you can add a Cropping2D layer to cut off pixels from the borders of the image. If it's smaller than you want, you can add a ZeroPadding2D layer to add zero-pixels to the borders of the image.

The syntax to create these layers is

Cropping2D(cropping=((a, b), (c, d)))
ZeroPadding2D(padding=((a, b), (c, d)))
  • a: number of rows you want to add/cut off to/from the top
  • b: number of rows you want to add/cut off to/from the bottom
  • c: number of columns you want to add/cut off to/from the left
  • d: number of columns you want to add/cut off to/from the right
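If you want the halving to be independent of any backend-specific "same" behavior, you can also do the padding yourself and use "valid" for the convolution. A minimal sketch, assuming the (64,80,1) input, 3x3 kernel, and stride of 2 from the question (TensorFlow's "same" pads the same way, putting the extra row/column at the bottom/right):

from tensorflow.keras.layers import Input, ZeroPadding2D, Conv2D
from tensorflow.keras.models import Model

inp = Input(shape=(64, 80, 1))
# Pad 1 row at the bottom and 1 column on the right: (64, 80) -> (65, 81)
x = ZeroPadding2D(padding=((0, 1), (0, 1)))(inp)
# 'valid' 3x3 convolution with stride 2: floor((65-3)/2)+1 = 32, floor((81-3)/2)+1 = 40
out = Conv2D(32, 3, strides=2, padding='valid')(x)
Model(inp, out).summary()  # output shape (None, 32, 40, 32)

Since the padding is now explicit, the output shape no longer depends on how the backend interprets "same".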

Note, however, that there is no strict technical need to always perfectly halve the size with each convolution layer. Your model might work well without any padding or cropping. You will have to experiment with it in order to find out.

sebrockm
  • Yes, I would like to achieve an output size that is half of the input size. I investigated it using model.summary(). I have to add or cut off exactly one row or column. Is it best to add or cut off columns or rows, and at the top or at the bottom? – machinery Jul 25 '19 at 15:58
  • If I used padding "same" (i.e. automatic padding) instead of manually cropping or padding, I assume that then also just one row or column would be added or cropped. Is this true, or does automatic padding corrupt or shift the whole image (or show other weird behavior)? Would this make a huge difference compared to manual cropping or padding? I mean, if a column were automatically added at the top and I manually added a column at the bottom, would that make a huge difference? – machinery Jul 25 '19 at 16:02
  • @machinery Removing one line of pixels, no matter where, shouldn't have any noticeable impact on the model's performance. But again, you will have to try. The idea behind `"same"` padding is to add zeros *before* the convolution, so that afterwards the output has the desired (i.e. same) size. But apparently, for `stride != 1` there is no consensus among the backends on what `"same"` means. – sebrockm Jul 25 '19 at 20:24