
I have a model that I am trying to train on a dataset with a class imbalance. The problem is multilabel classification (each sample has one or more labels), and I have calculated per-class weights for my dataset. I did see this implementation: BCEWithLogitsLoss in Keras

This is the equivalent in PyTorch:

criterion = nn.BCEWithLogitsLoss(pos_weight=trainset.labels_weights.to(DEVICE))

So I tried implementing the equivalent loss in Keras and passing it to my model:

def get_weighted_loss(weights):
    def weighted_loss(y_true, y_pred):
        xent = tf.compat.v2.losses.BinaryCrossentropy(from_logits=False, reduction=tf.compat.v2.keras.losses.Reduction.NONE)
        weighted_loss = tf.reduce_mean(xent(y_true, y_pred) * weights)
    return weighted_loss

and compiling the model like so:

model.compile(optimizer=optim, loss=get_weighted_loss(list(train_generatorLat.labels_weights.values())), metrics=[full_multi_label_metric])

where list(train_generatorLat.labels_weights.values()) is a list of per-class weights (floats) ranging from 1.0 to 5.0: a weight of 1.0 goes to the labels with the most examples and 5.0 to the labels with the fewest.
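
For concreteness, a hypothetical four-class weights list of that form (the values here are made up):

weights = list(train_generatorLat.labels_weights.values())  # e.g. [1.0, 1.4, 2.7, 5.0]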

But when compiling I get the following error:

AttributeError                            Traceback (most recent call last)
<ipython-input-108-98496152ec7d> in <module>
----> 1 model.compile(optimizer=optim, loss=get_weighted_loss(list(train_generatorLat.labels_weights.values())), metrics=[full_multi_label_metric])
      2 model.summary()

/gpfs/ysm/project/kl533/conda_envs/dlnn/lib/python3.6/site-packages/keras/engine/training.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, **kwargs)
    340                 with K.name_scope(self.output_names[i] + '_loss'):
    341                     output_loss = weighted_loss(y_true, y_pred,
--> 342                                                 sample_weight, mask)
    343                 if len(self.outputs) > 1:
    344                     self.metrics_tensors.append(output_loss)

/gpfs/ysm/project/kl533/conda_envs/dlnn/lib/python3.6/site-packages/keras/engine/training_utils.py in weighted(y_true, y_pred, weights, mask)
    415         if weights is not None:
    416             # reduce score_array to same ndim as weight array
--> 417             ndim = K.ndim(score_array)
    418             weight_ndim = K.ndim(weights)
    419             score_array = K.mean(score_array,

/gpfs/ysm/project/kl533/conda_envs/dlnn/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py in ndim(x)
    617     ```
    618     """
--> 619     dims = x.get_shape()._dims
    620     if dims is not None:
    621         return len(dims)

AttributeError: 'NoneType' object has no attribute 'get_shape'

Any ideas on how I would go about doing this?

Kevin

1 Answer


The last layer should have a 'sigmoid' activation.

In compile, your loss should be loss='binary_crossentropy'.

In fit or fit_generator, you will pass class_weight=dictionary_of_weights.

Where dictionary_of_weights is something like:

dictionary_of_weights = { 0: weight0,
                          1: weight1, 
                          ....
                          n: weightN }

where n+1 is the number of classes.
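
Putting the three steps together, a minimal runnable sketch in tf.keras (the layer sizes, number of classes, dummy data, and weight values are all placeholders, not the asker's actual setup):

import numpy as np
import tensorflow as tf

n_classes = 3  # placeholder; use your actual number of labels

# Dummy multilabel data so the sketch runs end to end.
x_train = np.random.rand(32, 100).astype('float32')
y_train = (np.random.rand(32, n_classes) > 0.5).astype('float32')

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(100,)),
    tf.keras.layers.Dense(n_classes, activation='sigmoid'),  # sigmoid, not softmax, for multilabel
])

model.compile(optimizer='adam', loss='binary_crossentropy')

# One entry per class index, mirroring the 1.0-5.0 range described in the question.
dictionary_of_weights = {0: 1.0, 1: 2.5, 2: 5.0}

model.fit(x_train, y_train, epochs=2, class_weight=dictionary_of_weights)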

Daniel Möller
  • Okay, so this will handle the class imbalance? How will the model reduce the loss? – Kevin Jan 09 '20 at 18:27
  • Yes, that's why you pass `class_weight`. – Daniel Möller Jan 09 '20 at 18:27
  • I see, this is what I suspected and implemented it as such, but it seems to just overfit to the training set. I am actually trying to replicate a study on chest X-rays where they also calculated class weights, but for some reason clipped the weights to [1, 5]. They apply the weights as pos_weight in BCEWithLogitsLoss in PyTorch, but I am not sure whether this would cause a difference? – Kevin Jan 09 '20 at 18:40
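
Regarding the pos_weight question in the last comment: the AttributeError in the question actually comes from the inner weighted_loss function, which computes the weighted mean but never returns it, so Keras receives None as the loss tensor and K.ndim(None) fails. The closest TensorFlow analogue to BCEWithLogitsLoss(pos_weight=...) is tf.nn.weighted_cross_entropy_with_logits. A minimal sketch, assuming weights is one positive-class weight per label and the model's last layer outputs raw logits (no sigmoid):

import tensorflow as tf

def get_weighted_loss(weights):
    # One positive-class weight per label, analogous to pos_weight in PyTorch.
    pos_weight = tf.constant(weights, dtype=tf.float32)
    def weighted_loss(y_true, y_pred):
        # y_pred must be raw logits here (drop the final sigmoid activation).
        per_label = tf.nn.weighted_cross_entropy_with_logits(
            labels=tf.cast(y_true, tf.float32),
            logits=y_pred,
            pos_weight=pos_weight)
        return tf.reduce_mean(per_label)  # this return was missing in the original attempt
    return weighted_loss

This can be passed to compile exactly as in the question, e.g. loss=get_weighted_loss(list(train_generatorLat.labels_weights.values())). Note that pos_weight scales only the positive term of each label's loss, while class_weight in fit reweights whole samples, so the two approaches are not exactly equivalent.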