
The Keras layer documentation specifies the input and output shapes for convolutional layers: https://keras.io/layers/convolutional/

Input shape: (samples, channels, rows, cols)

Output shape: (samples, filters, new_rows, new_cols)

And the kernel size is a spatial parameter, i.e. it determines only width and height.

So an input with c channels will yield an output with `filters` channels regardless of the value of c. It must therefore apply 2D convolution with a spatial height × width filter and then somehow aggregate the results across channels for each learned filter.
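
For concreteness, here is a minimal shape check (the sizes are arbitrary, and this uses the default channels_last layout, so shapes are (samples, rows, cols, channels)):

from keras.layers import Input, Conv2D
from keras.models import Model

inp = Input((32, 32, 5))                     # 5 input channels (arbitrary)
out = Conv2D(filters=7, kernel_size=3)(inp)  # 7 filters, 3x3 spatial kernel
print(Model(inp, out).output_shape)          # (None, 30, 30, 7): channels = filters, not 5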

What is this aggregation operator? Is it a summation across channels? Can I control it? I couldn't find any information in the Keras documentation.

Thanks.

yoki
  • You need to read [this](http://cs231n.github.io/convolutional-networks/). – Autonomous Apr 09 '17 at 22:27
  • From this page: "In the output volume, the d-th depth slice (of size W2×H2) is the result of performing a valid convolution of the d-th filter over the input volume with a stride of S, and then offset by d-th bias." So I still don't follow how these convolutions of a volume with a 2D kernel turn into a 2D result. Is the depth dimension reduced by summation? – yoki Apr 10 '17 at 06:53
  • "Example 1. For example, suppose that the input volume has size [32x32x3], (e.g. an RGB CIFAR-10 image). If the receptive field (or the filter size) is 5x5, then each neuron in the Conv Layer will have weights to a [5x5x3] region in the input volume, for a total of 5*5*3 = 75 weights (and +1 bias parameter). Notice that the extent of the connectivity along the depth axis must be 3, since this is the depth of the input volume." - I guess you are missing that it's a 3D kernel [width, height, depth]. So the result is summation across channels. – Nilesh Birari Apr 10 '17 at 11:21
  • @Nilesh Birari, my question is exactly how to know what Keras is doing. I guess it's summation, but how can I know for sure? – yoki Apr 10 '17 at 11:54

3 Answers


It might be confusing that it is called a Conv2D layer (it was to me, which is why I came looking for this answer), because as Nilesh Birari commented:

I guess you are missing that it's a 3D kernel [width, height, depth]. So the result is summation across channels.

Perhaps the 2D stems from the fact that the kernel only slides along two dimensions; the third dimension is fixed, determined by the number of input channels (the input depth).
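
To make that concrete, here is a minimal NumPy sketch (the function name and sizes are my own) of what a single Conv2D filter computes over a multi-channel input:

import numpy as np

def single_filter_conv2d(img, kernel):
    # img: (H, W, C) input; kernel: (h, w, C), one 2D slice per input channel
    H, W, C = img.shape
    h, w, _ = kernel.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # one scalar sum over the h x w x C block; the sum over the
            # channel axis is exactly the aggregation in question
            out[i, j] = np.sum(img[i:i+h, j:j+w, :] * kernel)
    return out

rgb = np.random.rand(5, 5, 3)     # toy 3-channel input
kernel = np.random.rand(3, 3, 3)  # one "2D" filter really spans the full depth
print(single_filter_conv2d(rgb, kernel).shape)  # (3, 3): a single 2D feature map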

For a more elaborate explanation, read https://petewarden.com/2015/04/20/why-gemm-is-at-the-heart-of-deep-learning/

I plucked an illustrative image from there:

[image: kernel depth]

noio
  • So does each channel of the filter have its own weights that can be optimized? Or do we just compute the weights for one channel and use those values for the rest of the channels of the filter? – Moondra Nov 20 '17 at 23:02
  • The kernels are different for all channels. See my answer. – Alaroff May 24 '18 at 07:42
  • @Regi According to remykarem's answer, your statement is wrong. – joba2ca Aug 31 '23 at 09:01

I was also wondering this, and found another answer here, where it is stated (emphasis mine):

Maybe the most tangible example of a multi-channel input is when you have a color image which has 3 RGB channels. Let's get it to a convolution layer with 3 input channels and 1 output channel. (...) What it does is that it calculates the convolution of each filter with its corresponding input channel (...). The stride of all channels is the same, so they output matrices of the same size. Now, **it sums up all matrices and outputs a single matrix** which is the only channel at the output of the convolution layer.

Illustration:

[image: three per-channel convolutions summed into a single output matrix]

Notice that the weights of the convolution kernels for each channel are different. These weights are then iteratively adjusted in the back-propagation steps, e.g. by gradient-descent-based algorithms such as stochastic gradient descent (SGD).
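
You can also verify directly that each input channel gets its own kernel slice by inspecting the layer's weights (a minimal sketch; the sizes here are arbitrary):

from keras.layers import Input, Conv2D

inp = Input((32, 32, 3))
layer = Conv2D(filters=16, kernel_size=5)
_ = layer(inp)  # build the layer so the weights are created

kernel, bias = layer.get_weights()
print(kernel.shape)  # (5, 5, 3, 16): a separate 5x5 slice per input channel, per filter
print(bias.shape)    # (16,): one bias per filter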

Here is a more technical answer from TensorFlow API.

Alaroff

I also needed to convince myself, so I ran a simple example with a 3×3 RGB image.

# red    # green        # blue
1 1 1    100 100 100    10000 10000 10000
1 1 1    100 100 100    10000 10000 10000    
1 1 1    100 100 100    10000 10000 10000

The filter is initialised to ones:

1 1
1 1

[image: three 2×2 kernels, one per input channel]

I have also set the convolution to have these properties:

  • no padding
  • strides = 1
  • relu activation function
  • bias initialised to 0

Each 2×2 window then sums to 4×1 + 4×100 + 4×10000 = 40404 across the channels, so we would expect the (aggregated) output to be:

40404 40404
40404 40404

Also, from the picture above, the number of parameters is

3 separate filters (one for each channel) × 4 weights + 1 (bias, not shown) = 13 parameters


Here's the code.

Import modules:

import numpy as np
from keras.layers import Input, Conv2D
from keras.models import Model

Create the red, green and blue channels:

red   = np.array([1]*9).reshape((3,3))
green = np.array([100]*9).reshape((3,3))
blue  = np.array([10000]*9).reshape((3,3))

Stack the channels to form an RGB image:

img = np.stack([red, green, blue], axis=-1)  # shape (3, 3, 3): channels last
img = np.expand_dims(img, axis=0)            # add a batch dimension: (1, 3, 3, 3)

Create a model that just does a Conv2D convolution:

inputs = Input((3,3,3))
conv = Conv2D(filters=1,
              strides=1,
              padding='valid',
              activation='relu',
              kernel_size=2,
              kernel_initializer='ones',
              bias_initializer='zeros')(inputs)
model = Model(inputs, conv)

Pass the image through the model:

model.predict(img)
# array([[[[40404.],
#          [40404.]],

#         [[40404.],
#          [40404.]]]], dtype=float32)

Run a summary to get the number of params:

model.summary()
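
The exact formatting varies by Keras version, but the summary should report something like:

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, 3, 3, 3)           0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 2, 2, 1)           13
=================================================================
Total params: 13
Trainable params: 13
Non-trainable params: 0
_________________________________________________________________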


remykarem
  • EXCELLENT contribution – Regi Mathew May 29 '19 at 06:20
  • This is an excellent answer. Honestly, I think the name conv2d is very confusing. – jelmood jasser Aug 19 '19 at 03:29
  • So, I am not the only one who started to wonder what's actually happening there and what the underlying aggregation looks like? – Stefan Falk Jul 23 '20 at 14:01
  • I find your statement "3 separate filters (one for each channel)" problematic. Conceptually, it is **one** filter that happens to span all n input channels. One filter therefore has H × W × num_input_channels parameters (not accounting for bias; in PyTorch, you have one bias per filter). When using this nomenclature, the number of filters equals the number of output channels after the convolution. – joba2ca Aug 31 '23 at 08:59