
I am building a CNN that does binary classification. I first do feature extraction with a wavelet transform and then pass that output to the model, but I keep getting the error below.

train_labels shape: (660,)

train_data shape: (660, 12), i.e. (number of samples, number of features)
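
For context, the 12 features per sample come from a wavelet decomposition of each raw signal, roughly like the sketch below (the wavelet, level, and statistics here are placeholders, not my exact code):

  import numpy as np
  import pywt  # PyWavelets

  def extract_features(signal, wavelet='db4', level=3):
      # decompose one 1-D signal into approximation + detail bands
      coeffs = pywt.wavedec(signal, wavelet, level=level)       # [cA3, cD3, cD2, cD1]
      stats = []
      for band in coeffs:
          stats += [band.mean(), band.std(), np.abs(band).max()]  # 4 bands x 3 stats = 12
      return np.array(stats)

  # raw_signals: placeholder name for the 660 raw recordings
  train_data = np.stack([extract_features(s) for s in raw_signals])  # shape (660, 12)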

I've tried:

  1. adding a new dimension to the dataset with np.newaxis, but that produces a cardinality error ("Data cardinality is ambiguous: x sizes: 1, y sizes: 660") – see the sketch after this list;

  2. reshaping the labels to make the cardinalities match, but that is not a real fix, since the model then maps to 660 classes instead of 2.
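
Roughly what attempt (1) looked like (I think the new axis ends up in front, so x has a single sample while y has 660, which is where the cardinality error comes from):

  import numpy as np

  # attempt (1): add an extra dimension in front of the data
  x = train_data[np.newaxis, ...]     # (660, 12) -> (1, 660, 12), so Keras sees 1 sample
  # model.fit(x, train_labels, ...)   # -> "Data cardinality is ambiguous: x sizes: 1, y sizes: 660"

The full traceback of the error I keep getting: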

    ValueError: in user code:
    
     File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function  *
         return step_function(self, iterator)
     File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function  **
         outputs = model.distribute_strategy.run(run_step, args=(data,))
     File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step  **
         outputs = model.train_step(data)
     File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 859, in train_step
         y_pred = self(x, training=True)
     File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
         raise e.with_traceback(filtered_tb) from None
     File "/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py", line 264, in assert_input_compatibility
         raise ValueError(f'Input {input_index} of layer "{layer_name}" is '
    
     ValueError: Input 0 of layer "sequential_52" is incompatible with the layer: expected shape=(None, 660, 12), found shape=(None, 12)
    

My code:

  from tensorflow.keras.models import Sequential
  from tensorflow.keras.layers import Conv1D, BatchNormalization, Activation, Dense, Dropout
  from tensorflow.keras.optimizers import SGD

  model = Sequential()
  model.add(Conv1D(16, 1, input_shape=(660, 12), name='Conv1'))
  model.add(BatchNormalization())
  model.add(Activation('relu'))
  model.add(Conv1D(32, 1, name='Conv2'))
  model.add(Activation('relu'))
  model.add(Dense(256, name='FC2'))
  model.add(Activation('relu'))
  model.add(Dropout(0.25))
  model.add(Dropout(0.5))
  model.add(Dense(1, activation='sigmoid'))
  sgd = SGD()

  model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])
  • Don't specify the number of samples (batch size) in the input. `input_shape` for the input layer should be (None, 12) – cao-nv May 09 '22 at 03:09
  • That raises `Input 0 of layer "Conv1" is incompatible with the layer: expected min_ndim=3, found ndim=2. Full shape received: (None, 12)` – menna May 09 '22 at 15:13
  • Sorry, as in https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv1D, the documentation states that the input shape could be something like (12, 1) or (None, 12). Could you clarify whether 660 is the number of data samples or the number of frames for a single data sample? – cao-nv May 10 '22 at 07:44
  • I tried (12, 1) and it worked, thank you!! And 660 is the total number of data samples. – menna May 16 '22 at 22:51
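
The fix from the last comment, spelled out (a minimal sketch: the 12 features are treated as 12 steps of a single channel, so only the per-sample shape goes into input_shape):

  import numpy as np

  train_data = np.random.rand(660, 12)   # placeholder for the wavelet features

  # add a trailing channel axis: (660, 12) -> (660, 12, 1)
  x_train = train_data[..., np.newaxis]

  # the first layer then describes a single sample, not the whole dataset:
  # model.add(Conv1D(16, 1, input_shape=(12, 1), name='Conv1'))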

1 Answer


I reproduced your model and used model.summary() to take a closer look at the data shapes at the different layers. Are you sure you want the shape (None, 660, 1) at the output?

Model: "sequential_9"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
Conv1 (Conv1D)               (None, 660, 16)           208       
_________________________________________________________________
batch_normalization_5 (Batch (None, 660, 16)           64        
_________________________________________________________________
activation_15 (Activation)   (None, 660, 16)           0         
_________________________________________________________________
Conv2 (Conv1D)               (None, 660, 32)           544       
_________________________________________________________________
activation_16 (Activation)   (None, 660, 32)           0         
_________________________________________________________________
FC2 (Dense)                  (None, 660, 256)          8448      
_________________________________________________________________
activation_17 (Activation)   (None, 660, 256)          0         
_________________________________________________________________
dropout_8 (Dropout)          (None, 660, 256)          0         
_________________________________________________________________
dropout_9 (Dropout)          (None, 660, 256)          0         
_________________________________________________________________
dense_7 (Dense)              (None, 660, 1)            257       
=================================================================
Total params: 9,521
Trainable params: 9,489
Non-trainable params: 32
_________________________________________________________________

If you want a single-output binary classification, I suggest adding a Flatten layer or a MaxPool1D layer somewhere before the final layer, as in the sketch below.
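
For example, keeping your layers as they are and only inserting a Flatten before the last Dense (a rough sketch, not your exact code) turns the output into (None, 1):

  from tensorflow.keras.models import Sequential
  from tensorflow.keras.layers import (Conv1D, BatchNormalization, Activation,
                                       Dense, Dropout, Flatten)

  model = Sequential()
  model.add(Conv1D(16, 1, input_shape=(660, 12), name='Conv1'))
  model.add(BatchNormalization())
  model.add(Activation('relu'))
  model.add(Conv1D(32, 1, name='Conv2'))
  model.add(Activation('relu'))
  model.add(Dense(256, name='FC2'))
  model.add(Activation('relu'))
  model.add(Dropout(0.25))
  model.add(Dropout(0.5))
  model.add(Flatten())                       # (None, 660, 256) -> (None, 168960)
  model.add(Dense(1, activation='sigmoid'))  # (None, 1) instead of (None, 660, 1)
  model.summary()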
