
I am defining a Lambda layer with a function that applies a Conv2D layer:

from keras.layers import Conv2D, Lambda

def lambda_func(x, k):
    y = Conv2D(k, (3, 3), padding='same')(x)
    return y

And I am calling it with:

k = 64
x = Conv2D(k, (3,3), data_format='channels_last', padding='same', name='block1_conv1')(inputs)
y = Lambda(lambda_func, arguments={'k': k}, name='block1_conv1_loc')(x)

But in model.summary(), the lambda layer is showing no parameters!

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 224, 224, 3)       0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792      
_________________________________________________________________
block1_conv1_loc (Lambda)    (None, 224, 224, 64)      0         
_________________________________________________________________
activation_1 (Activation)    (None, 224, 224, 64)      0         
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0         
_________________________________________________________________
flatten (Flatten)            (None, 802816)            0         
_________________________________________________________________

(There is a Dense layer under it, and a softmax two-class classifier under that.) How can I ensure that the Conv2D parameters inside the Lambda layer show up in the summary and are trainable? I have also tried passing trainable=True to the Conv2D inside the function:

def lambda_func(x, k):
    y = Conv2D(k, (3, 3), padding='same', trainable=True)(x)
    return y

But that did not make any difference.

Prabaha
  • how exactly are you calling `summary()`... on what model? – DarkCygnus Jul 06 '17 at 17:17
  • I'm using standard procedure. `model = my_model(weights_path='weights.h5')` where I defined `my_model` with the `Model` API. Then I called `model.compile(optimizer='RMSprop', loss='binary_crossentropy', metrics=['accuracy'])` to compile the model, and then `model.summary()` to look at its structure – Prabaha Jul 06 '17 at 17:59

1 Answer


Lambda layers don't have parameters.

Parameters, in the summary, are the variables that can "learn". Lambda layers never learn; they are just functions you define.

If you do intend to use a convolutional *layer*, use it outside of the Lambda layer, so that its weights are registered with the model.
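For example, reusing the names from the question, the second convolution's weights appear in the summary once Conv2D is called directly instead of through a Lambda:

x = Conv2D(k, (3, 3), padding='same', name='block1_conv1')(inputs)
y = Conv2D(k, (3, 3), padding='same', name='block1_conv1_loc')(x)  # now listed with 36,928 trainable params for k=64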
Now, if you want a convolution *operation*, then use it inside the Lambda layer, but there will be no learnable parameters: you define the filters yourself.
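For instance, here is a minimal sketch of a fixed-filter convolution inside a Lambda, assuming the Keras backend function K.conv2d and a hand-built kernel sized for the 64-channel tensor from the question:

import numpy as np
from keras import backend as K
from keras.layers import Lambda

# a fixed, non-learnable 3x3 averaging kernel, shaped
# (height, width, in_channels, out_channels) for channels_last data
fixed_kernel = K.constant(np.ones((3, 3, 64, 64), dtype='float32') / (9.0 * 64))

def fixed_conv(x):
    # a plain convolution operation; nothing here is trainable
    return K.conv2d(x, fixed_kernel, strides=(1, 1), padding='same')

y = Lambda(fixed_conv, name='fixed_conv')(x)  # still 0 params in the summary, as expected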

If you want to create a special layer that learns in a different way, then create a custom layer.
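As a minimal sketch of such a custom layer (Keras 2 style; the class name TrainableConv and the glorot initializer are illustrative, and the Layer import path may differ across Keras versions):

import numpy as np
from keras import backend as K
from keras.engine.topology import Layer

class TrainableConv(Layer):
    """Illustrative custom layer: a 3x3 convolution whose kernel is
    created with add_weight, so it is counted in model.summary()."""

    def __init__(self, filters, **kwargs):
        self.filters = filters
        super(TrainableConv, self).__init__(**kwargs)

    def build(self, input_shape):
        in_channels = input_shape[-1]
        # registering the kernel as a trainable weight is what makes
        # it appear in the summary and get updated during training
        self.kernel = self.add_weight(name='kernel',
                                      shape=(3, 3, in_channels, self.filters),
                                      initializer='glorot_uniform',
                                      trainable=True)
        super(TrainableConv, self).build(input_shape)

    def call(self, x):
        return K.conv2d(x, self.kernel, strides=(1, 1), padding='same')

    def compute_output_shape(self, input_shape):
        return input_shape[:-1] + (self.filters,)

Because the kernel is created with add_weight(..., trainable=True), it is counted in model.summary() and updated during training, unlike anything created inside a Lambda.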

Daniel Möller
  • I actually wish to feed in only part of the previous layer to a new convolution layer, [like this question I posted](https://stackoverflow.com/questions/44809247/keras-feeding-in-part-of-previous-layer-to-next-layer-in-cnn). I tried using a Lambda layer to make that work. How can I define a custom layer that just takes in a part of the previous layer's output, like say the output of only 1 kernel, instead of all of them? – Prabaha Jul 06 '17 at 18:03
  • 1
    As suggested by @Daniel, the solution can be indeed be reached by using the `Conv2D` layer outside the `Lambda` layer, like in [this answer](https://stackoverflow.com/questions/44809247/keras-feeding-in-part-of-previous-layer-to-next-layer-in-cnn/44960774#44960774). – Prabaha Jul 07 '17 at 18:20
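For reference, a minimal sketch of the approach the comments converge on: the Lambda only slices the previous layer's output (so it rightly has 0 parameters), and the learnable Conv2D sits outside it. The slice index below is illustrative:

from keras.layers import Conv2D, Lambda

# keep only the first feature map of the previous layer (illustrative slice)
x_part = Lambda(lambda t: t[:, :, :, :1], name='take_first_channel')(x)
y = Conv2D(64, (3, 3), padding='same', name='conv_on_slice')(x_part)  # trainable weights show in the summary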