
I have been trying to stack convolutional neural networks with GRUs for an image-to-text problem. Here's my model:

model = Sequential()

model.add(TimeDistributed(Conv2D(16, kernel_size=(3,3), data_format="channels_last", input_shape=(129,80,564,3), padding='SAME', strides=(1,1))))
model.add(TimeDistributed(Activation("relu")))
model.add(TimeDistributed(Conv2D(16, kernel_size=(3,3), strides=(1,1))))
model.add(TimeDistributed(Activation("relu")))
model.add(TimeDistributed(MaxPooling2D(pool_size=2, strides=(1,1))))
model.add(TimeDistributed(Reshape((280*38*16,))))
model.add(TimeDistributed(Dense(32)))
model.add(GRU(512))
model.add(Dense(50))
model.add(Activation("softmax"))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

When I try to fit my model, I get the following error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-125-c6a3c418689c> in <module>()
      1 nb_epoch = 100
----> 2 model.fit(X2, L2, epochs=100)

10 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/nn_ops.py in _get_sequence(value, n, channel_index, name)
     71   else:
     72     raise ValueError("{} should be of length 1, {} or {} but was {}".format(
---> 73         name, n, n + 2, current_n))
     74 
     75   if channel_index == 1:

ValueError: strides should be of length 1, 1 or 3 but was 2

I cannot even begin to wrap my head around why this message appears. I have specified the "strides" parameter for every layer. Any help will be deeply appreciated.

P.S. I did not have any problems when I fitted a model without the TimeDistributed layers, so the error may have something to do with that wrapper.

lazypanda

1 Answer


You have made several mistakes in your code:

  • In the first layer you should specify the input_shape on the TimeDistributed wrapper, not on the Conv2D layer it wraps.
  • MaxPooling2D is used to down-sample the spatial size of the images, but with strides=(1,1) the image size stays the same and is never reduced. If you leave strides unset, it defaults to pool_size.
  • Using padding='SAME' in the first layer adds zero-padding during convolution, so the feature maps keep their spatial size and no longer match the hard-coded target of the Reshape layer (see the shape trace after this list). Alternatively, you can use a Flatten layer instead of Reshape.
  • The default value of strides in a Conv2D is already (1,1), so it's optional to mention.
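
To see why the Reshape target only works out with the default padding='valid', you can trace the per-frame shapes through the stack (a minimal sketch; the arithmetic mirrors the layer sizes used in the code below):

h, w = 80, 564           # input frame height, width
h, w = h - 2, w - 2      # Conv2D(16, (3,3)): a 3x3 'valid' conv trims 2 pixels per dim
h, w = h - 2, w - 2      # second Conv2D(16, (3,3))
h, w = h // 2, w // 2    # MaxPooling2D(pool_size=2): strides defaults to pool_size
assert (h, w) == (38, 280)
assert h * w * 16 == 280*38*16   # matches the Reshape target exactly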

Finally, the working code should look something like the following:

import keras

model = keras.models.Sequential()
# input_shape goes on the TimeDistributed wrapper: (timesteps, height, width, channels)
model.add(keras.layers.TimeDistributed(keras.layers.Conv2D(16, kernel_size=(3,3), data_format="channels_last"), input_shape=(129,80,564,3)))
model.add(keras.layers.TimeDistributed(keras.layers.Activation("relu")))
model.add(keras.layers.TimeDistributed(keras.layers.Conv2D(16, kernel_size=(3,3))))
model.add(keras.layers.TimeDistributed(keras.layers.Activation("relu")))
model.add(keras.layers.TimeDistributed(keras.layers.MaxPooling2D(pool_size=2)))
# model.add(keras.layers.TimeDistributed(keras.layers.Flatten()))  # alternative to Reshape
model.add(keras.layers.TimeDistributed(keras.layers.Reshape((280*38*16,))))
model.add(keras.layers.TimeDistributed(keras.layers.Dense(32)))
model.add(keras.layers.GRU(512))
model.add(keras.layers.Dense(50))
model.add(keras.layers.Activation("softmax"))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
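
As a quick sanity check, you can run a one-epoch smoke test on random placeholder arrays (a sketch; X_dummy, y_dummy and the tiny batch size are made-up stand-ins for your real X2 and L2):

import numpy as np

model.summary()  # verify that the layer output shapes line up

# Hypothetical random batch: 2 samples of 129 frames each, 50 target classes
X_dummy = np.random.rand(2, 129, 80, 564, 3).astype("float32")
y_dummy = keras.utils.to_categorical(np.random.randint(0, 50, size=2), num_classes=50)
model.fit(X_dummy, y_dummy, epochs=1)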
Kaushik Roy