
I am trying to execute this example notebook on modulation recognition: https://github.com/radioML/examples/blob/master/modulation_recognition/RML2016.10a_VTCNN2_example.ipynb

After executing this

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D, Reshape, ZeroPadding2D, BatchNormalization
from tensorflow.keras.callbacks import EarlyStopping


model = Sequential()

model.add(Reshape([1]+in_shp, input_shape=in_shp))

model.add(ZeroPadding2D((0, 2), data_format="channels_first"))
model.add(Conv2D(256, (1,3), data_format="channels_first"))
model.add(Dropout(0.5))

model.add(ZeroPadding2D((0, 1), data_format="channels_first"))
model.add(Conv2D(80, (2, 3), data_format="channels_first", activation="relu"))

model.add(Dropout(0.5))

model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))

model.add(Dense(len(classes), activation='softmax'))
model.add(Activation('softmax'))
model.add(Reshape([len(classes)]))

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()

I get this

Model: "sequential_4"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
reshape_7 (Reshape)          (None, 1, 2, 128)         0         
_________________________________________________________________
zero_padding2d_8 (ZeroPaddin (None, 1, 2, 132)         0         
_________________________________________________________________
conv2d_8 (Conv2D)            (None, 256, 2, 130)       1024      
_________________________________________________________________
dropout_10 (Dropout)         (None, 256, 2, 130)       0         
_________________________________________________________________
zero_padding2d_9 (ZeroPaddin (None, 256, 2, 132)       0         
_________________________________________________________________
conv2d_9 (Conv2D)            (None, 80, 1, 130)        122960    
_________________________________________________________________
dropout_11 (Dropout)         (None, 80, 1, 130)        0         
_________________________________________________________________
flatten_3 (Flatten)          (None, 10400)             0         
_________________________________________________________________
dense_6 (Dense)              (None, 256)               2662656   
_________________________________________________________________
dropout_12 (Dropout)         (None, 256)               0         
_________________________________________________________________
dense_7 (Dense)              (None, 11)                2827      
_________________________________________________________________
activation_3 (Activation)    (None, 11)                0         
_________________________________________________________________
reshape_8 (Reshape)          (None, 11)                0         
=================================================================
Total params: 2,789,467
Trainable params: 2,789,467
Non-trainable params: 0
_________________________________________________________________

and then when I run this

model_fit(model, X_train, Y_train, test_idx)

I am getting this error

**InvalidArgumentError:  Conv2DCustomBackpropInputOp only supports NHWC.**
     [[node Conv2DBackpropInput (defined at <ipython-input-17-9cd1191bc59a>:3) ]] [Op:__inference_distributed_function_3032]

Function call stack:
distributed_function

When I run the same code on other machines it works, so I uninstalled Anaconda, Keras, and TensorFlow and reinstalled everything, but the error remains. For reference:

in_shp = [2, 128]
X_train.shape = (110000, 2, 128)

1 Answer


NHWC stands for Num_samples x Height x Width x Channels.

You have X_train.shape = (110000, 2, 128), but what you pass to the model should have shape (110000, 2, 128, 1) for single-channel data (or (110000, 2, 128, 3) if the samples were RGB images). Your in_shp should change accordingly.
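A minimal sketch of this fix, using the `np.expand_dims` approach mentioned in the comments below (the small random array stands in for the real 110000-sample X_train):

```python
import numpy as np

# Stand-in for X_train, which has shape (110000, 2, 128):
# each sample is a 2 x 128 block of I/Q values with no channel axis.
X_train = np.random.randn(5, 2, 128).astype("float32")

# Append a trailing channel axis so the data is NHWC:
# (samples, height=2, width=128, channels=1).
X_train = np.expand_dims(X_train, axis=3)
print(X_train.shape)  # (5, 2, 128, 1)

# The model's input shape must match the per-sample shape.
in_shp = list(X_train.shape[1:])
print(in_shp)  # [2, 128, 1]
```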

aminrd
  • Thank you very much. So I use reshape on X_train and inp_shape? Or is there any other method to achieve what you suggested? – Yohan cyrus Mar 06 '20 at 01:24
  • Probably with something like: `X_train = np.expand_dims(X_train, axis = 3)` – aminrd Mar 06 '20 at 01:31
  • So I changed `X_train.shape = (110000, 2, 128, 1)` and `inp_shp = [2, 128, 1]` and executed. Then I got, `ValueError: Input 0 of layer zero_padding2d_17 is incompatible with the layer: expected ndim=4, found ndim=5. Full shape received: [None, 1, 2, 128, 1]` – Yohan cyrus Mar 06 '20 at 01:45
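The ValueError in the last comment follows from the model itself: it starts with `Reshape([1] + in_shp)`, which prepends an extra axis per sample. Once X_train already carries a trailing channel axis, that Reshape produces a 5-D tensor, which ZeroPadding2D rejects. A numpy sketch of the shape arithmetic (batch size shrunk to 4 for illustration):

```python
import numpy as np

in_shp = [2, 128, 1]        # per-sample shape after adding the channel axis
x = np.zeros([4] + in_shp)  # batch of 4 samples, shape (4, 2, 128, 1)

# Reshape([1] + in_shp) prepends one more axis per sample...
y = x.reshape([4, 1] + in_shp)
print(y.shape)  # (4, 1, 2, 128, 1)
print(y.ndim)   # 5 -> "expected ndim=4, found ndim=5"

# ...while the unmodified batch is already the 4-D NHWC tensor
# that ZeroPadding2D expects.
print(x.ndim)   # 4
```

One way to reconcile this (an assumption, not stated in the thread) would be to drop the leading Reshape layer and remove the `data_format="channels_first"` arguments, so the channels-last data flows through unchanged.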