
I have the following code in Keras (I am basically adapting this example code for my own use), and I get this error:

'ValueError: Error when checking target: expected conv3d_3 to have 5 dimensions, but got array with shape (10, 4096)'

Code:

from keras.models import Sequential
from keras.layers.convolutional import Conv3D
from keras.layers.convolutional_recurrent import ConvLSTM2D
from keras.layers.normalization import BatchNormalization
import numpy as np
import pylab as plt
from keras import layers

# We create a layer which take as input movies of shape
# (n_frames, width, height, channels) and returns a movie
# of identical shape.

model = Sequential()
model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   input_shape=(None, 64, 64, 1),
                   padding='same', return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   padding='same', return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   padding='same', return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   padding='same', return_sequences=True))
model.add(BatchNormalization())

model.add(Conv3D(filters=1, kernel_size=(3, 3, 3),
               activation='sigmoid',
               padding='same', data_format='channels_last'))
model.compile(loss='binary_crossentropy', optimizer='adadelta')

The data I feed in has the shape [1, 10, 64, 64, 1]. So I would like to know where I am wrong, and also how to see the output_shape of each layer.

MRM

2 Answers


You can get the output shape of a layer with `layer.output_shape`:

for layer in model.layers:
    print(layer.output_shape)

Gives you:

(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 1)

Alternatively, you can pretty-print the model using `model.summary()`:

model.summary()

This gives you the number of parameters and output shape of each layer, plus the overall model structure, in a readable format:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv_lst_m2d_1 (ConvLSTM2D)  (None, None, 64, 64, 40)  59200     
_________________________________________________________________
batch_normalization_1 (Batch (None, None, 64, 64, 40)  160       
_________________________________________________________________
conv_lst_m2d_2 (ConvLSTM2D)  (None, None, 64, 64, 40)  115360    
_________________________________________________________________
batch_normalization_2 (Batch (None, None, 64, 64, 40)  160       
_________________________________________________________________
conv_lst_m2d_3 (ConvLSTM2D)  (None, None, 64, 64, 40)  115360    
_________________________________________________________________
batch_normalization_3 (Batch (None, None, 64, 64, 40)  160       
_________________________________________________________________
conv_lst_m2d_4 (ConvLSTM2D)  (None, None, 64, 64, 40)  115360    
_________________________________________________________________
batch_normalization_4 (Batch (None, None, 64, 64, 40)  160       
_________________________________________________________________
conv3d_1 (Conv3D)            (None, None, 64, 64, 1)   1081      
=================================================================
Total params: 407,001
Trainable params: 406,681
Non-trainable params: 320
_________________________________________________________________

If you want to access information about a specific layer only, you can pass the `name` argument when constructing that layer and then retrieve it like this:

...
model.add(ConvLSTM2D(..., name='conv3d_0'))
...

model.get_layer('conv3d_0')

EDIT: For reference's sake, the shape printed this way will always be the same as `layer.output_shape`, so please don't actually use Lambda or custom layers just for this. That said, you can use a Lambda layer to echo the shape of the tensor passing through it:

...
from keras.layers import Lambda

def print_tensor_shape(x):
    # Prints the (symbolic) shape of the tensor at graph-construction time.
    print(x.shape)
    return x

model.add(Lambda(print_tensor_shape))
...

Or write a custom layer and print the shape of the tensor in its `call()` method:

from keras.layers import Layer

class echo_layer(Layer):
...
    def call(self, x):
        print(x.shape)
        return x
...

model.add(echo_layer())
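As a side note on the error in the question: a target of shape (10, 4096) is just 10 frames of 64 × 64 pixels flattened, since 64 * 64 = 4096. A minimal NumPy sketch (assuming a single sample, as in the question) restores the five dimensions the final Conv3D layer expects:

```python
import numpy as np

# A flattened target, as in the error message:
# 10 frames of 64 * 64 = 4096 pixels each.
y_flat = np.zeros((10, 4096))

# Restore the (samples, frames, height, width, channels) layout
# that the final Conv3D layer expects.
y = y_flat.reshape((1, 10, 64, 64, 1))

print(y.shape)  # (1, 10, 64, 64, 1)
```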
umutto
  • I know about `model.summary()` and `layer.output_shape`; what I actually meant was that I would like to see the output shape after feeding the data. In other words, I do not know why I get the mentioned error: 'ValueError: Error when checking target: expected conv3d_3 to have 5 dimensions, but got array with shape (10, 4096)' – MRM Mar 28 '18 at 06:12
  • I have tried to run the model with your inputs. `model.fit(np.ones((1, 10, 64, 64, 1)), np.ones((1, 1, 64, 64, 1)))` worked for me. What does your `y` look like? – umutto Mar 28 '18 at 06:14
  • My `y` shape is the same as my `x` shape. Basically, I am using the "Moving MNIST" dataset, where each sequence is 20 frames, and I am trying to predict the second 10 frames based on the first 10 frames. – MRM Mar 28 '18 at 06:19
  • @MaryamRahmaniMoghaddam Can you check the shape of your `y` again? I can't reproduce the error while fitting `x.shape = y.shape = (1, 10, 64, 64, 1)`. Looking at the error, (10, 4096) suggests your inputs or `y` have somehow been flattened. If it is your `y`, you may need to reshape it or change the output shape of your model. – umutto Mar 28 '18 at 06:28
  • You are right; my `y` was not in the right format. But now I am getting this error: "AttributeError: 'ProgbarLogger' object has no attribute 'log_values'". Do you also get this error? – MRM Mar 28 '18 at 06:40
  • @MaryamRahmaniMoghaddam No, I don't, but errors related to the progress bar usually have something to do with the train/test split and validation arguments on `fit` (which I didn't pass). Check [this github thread](https://github.com/keras-team/keras/issues/3657) for similar errors. – umutto Mar 28 '18 at 06:43
  • When I try this I get `AttributeError: 'Tensor' object has no attribute 'output_shape'`, e.g. after running `top = Concatenate()([layer1, layer2]); print(top.output_shape)` (using Keras 2.2). – sh37211 Sep 30 '18 at 15:44
  • @umutto What is the meaning of Param #? – Vincent Jun 16 '19 at 05:16
  • @Vincent The **Param #** column represents the weights and other parameters adjustable (during training with backprop) for that layer. For example, the number of parameters in a simple dense layer would be calculated as `params = weights = output_size * (input_size + 1)`, where the +1 is the bias. `ConvLSTM2D` layers are a bit more complicated to calculate. Also, sometimes these parameters are static, or, as in this case (the `BatchNormalization` layers), not updated by backprop training but by statistical methods using the variance and mean; hence the 320 non-trainable params below. – umutto Jun 16 '19 at 07:51

You can get the output shape with `model.output.shape[1:]`, which drops the batch dimension. The resulting shape of the output layer can then be reused for other purposes.

Pouyan