
Here is a code snippet for building a CNN-LSTM that uses a pre-trained MobileNetV3Small as the per-frame encoder:

from tensorflow.keras.applications import MobileNetV3Small
from tensorflow.keras.layers import (Input, GlobalAveragePooling2D,
                                     TimeDistributed, LSTM, Dense)
from tensorflow.keras.models import Model

inputs = Input(shape=(60, 224, 224, 3))  # 60 frames of 224x224 RGB
cnn_base = MobileNetV3Small(include_top=False, weights='imagenet', input_shape=(224, 224, 3))

# Per-frame encoder: MobileNetV3Small features pooled to a single vector
cnn_out = GlobalAveragePooling2D()(cnn_base.output)
cnn = Model(inputs=cnn_base.input, outputs=cnn_out)
cnn_out.set_shape((None, 576))  # MobileNetV3Small with include_top=False gives 576 channels

# Apply the encoder to every frame, then model the sequence with an LSTM
encoded_frames = TimeDistributed(cnn)(inputs)
encoded_sequence = LSTM(256)(encoded_frames)

hidden_layer = Dense(1024, activation="relu")(encoded_sequence)
outputs = Dense(50, activation="softmax")(hidden_layer)
model = Model([inputs], outputs)

I have been getting this error:

NotImplementedError: Exception encountered when calling layer "time_distributed_43" (type TimeDistributed).

Please run in eager mode or implement the `compute_output_shape` method on your layer (TFOpLambda).

Call arguments received by layer "time_distributed_43" (type TimeDistributed):
  • inputs=tf.Tensor(shape=(None, 60, 224, 224, 3), dtype=float32)
  • training=False
  • mask=None

Does anyone know a quick fix for this? I have already tried enabling eager execution, but had no luck with:

tf.compat.v1.enable_eager_execution()

1 Answer


This worked for me. The quick trick is to wrap the CNN in a Lambda layer before passing it to TimeDistributed:

input_layer = Input(...)  # the (frames, height, width, channels) video input
...
lstm_in = Lambda(lambda x: whatever_cnn_model(x))  # wrap the per-frame CNN in a Lambda
lstm_input = TimeDistributed(lstm_in, input_shape=sequence_input_shape)(input_layer)  # e.g. (60, 224, 224, 3) from the question
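For completeness, here is a minimal sketch of that trick applied to the model from the question. The names frame_encoder, num_frames, and num_classes are my own placeholders, and the output_shape argument on the Lambda is an assumption added so that Keras does not have to infer the shape through MobileNetV3's internal TFOpLambda ops:

from tensorflow.keras.applications import MobileNetV3Small
from tensorflow.keras.layers import (Input, Lambda, GlobalAveragePooling2D,
                                     TimeDistributed, LSTM, Dense)
from tensorflow.keras.models import Model

num_frames, num_classes = 60, 50  # placeholders taken from the question

inputs = Input(shape=(num_frames, 224, 224, 3))

cnn_base = MobileNetV3Small(include_top=False, weights='imagenet', input_shape=(224, 224, 3))
cnn_out = GlobalAveragePooling2D()(cnn_base.output)
frame_encoder = Model(inputs=cnn_base.input, outputs=cnn_out)

# Wrap the per-frame CNN in a Lambda so TimeDistributed sees a single layer
# with an explicit output shape instead of MobileNetV3's internal ops.
lstm_in = Lambda(lambda x: frame_encoder(x), output_shape=(576,))
encoded_frames = TimeDistributed(lstm_in)(inputs)

encoded_sequence = LSTM(256)(encoded_frames)
hidden_layer = Dense(1024, activation="relu")(encoded_sequence)
outputs = Dense(num_classes, activation="softmax")(hidden_layer)
model = Model(inputs, outputs)
model.summary()

One caveat: as far as I know, Keras does not track the weights of a model called inside a Lambda, so with this workaround the MobileNet encoder is effectively frozen during training. If you need to fine-tune it, a small custom layer that wraps the encoder and implements compute_output_shape may be the safer route.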
momo668
  • 199
  • 3
  • 8