If Y_pred is very far from Y, the loss value will be very high. Conversely, if the two values are almost identical, the loss value will be very low. Hence we need to choose a loss function that can penalize the model effectively while it is training on a dataset.
When a neural network is trying to predict a discrete value, we can consider it to be a classification model. This could be a network trying to predict what kind of animal is present in an image, or whether an email is spam or not.
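As a small illustration of the two points above, cross-entropy (a common classification loss) is large when the probability assigned to the true class is far off and small when it is close; the probabilities below are made-up examples.

    import numpy as np

    def cross_entropy(p_true_class):
        # Negative log-likelihood of the probability assigned to the true class.
        return -np.log(p_true_class)

    print(cross_entropy(0.95))  # confident and correct -> ~0.05 (low loss)
    print(cross_entropy(0.05))  # confident but wrong   -> ~3.00 (high loss)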
Questions tagged [loss-function]
1727 questions
7 votes, 1 answer
Need help implementing a custom loss function in lightGBM (Zero-inflated Log Normal Loss)
I'm trying to implement the zero-inflated log normal loss function from this paper (https://arxiv.org/pdf/1912.07753.pdf, page 5) in LightGBM. But, admittedly, I just don't know how. I don't understand how to get the gradient and hessian of…

Negative Correlation · 813 · 1 · 11 · 26
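For the LightGBM question above, a minimal sketch of the general shape of a custom objective, using plain squared error as a stand-in rather than the ZILN loss itself (the function name, data, and parameters are illustrative):

    import numpy as np
    import lightgbm as lgb

    def custom_objective(preds, train_data):
        # LightGBM expects the per-sample first and second derivatives of the
        # loss with respect to the raw predictions.
        y = train_data.get_label()
        grad = preds - y              # d/d(pred) of 0.5 * (pred - y)^2
        hess = np.ones_like(preds)    # second derivative is constant here
        return grad, hess

    X, y = np.random.rand(200, 5), np.random.rand(200)
    train_set = lgb.Dataset(X, label=y)
    # LightGBM >= 4.0 accepts a callable as the objective; older versions take
    # it via lgb.train(..., fobj=custom_objective) instead.
    booster = lgb.train({"objective": custom_objective, "verbose": -1},
                        train_set, num_boost_round=10)

The ZILN loss itself would replace the squared-error gradient and hessian above with the derivatives of the classification and lognormal terms described on page 5 of the paper.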
7 votes, 1 answer
Loss with custom backward function in PyTorch - exploding loss in simple MSE example
Before working on something more complex, where I knew I would have to implement my own backward pass, I wanted to try something nice and simple. So, I tried to do linear regression with mean squared error loss using PyTorch. This went wrong (see…

Björn · 644 · 10 · 23
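A minimal sketch of a hand-written backward pass for MSE via torch.autograd.Function (not the asker's code); an exploding loss in this setup is often a gradient that is off by a scale factor, so the backward below returns exactly 2 * (pred - target) / N:

    import torch

    class MSELossFn(torch.autograd.Function):
        @staticmethod
        def forward(ctx, pred, target):
            ctx.save_for_backward(pred, target)
            return ((pred - target) ** 2).mean()

        @staticmethod
        def backward(ctx, grad_output):
            pred, target = ctx.saved_tensors
            # d/d(pred) of mean((pred - target)^2) = 2 * (pred - target) / N
            grad_pred = grad_output * 2.0 * (pred - target) / pred.numel()
            return grad_pred, None  # no gradient needed w.r.t. the target

    pred = torch.randn(8, requires_grad=True)
    target = torch.randn(8)
    MSELossFn.apply(pred, target).backward()  # pred.grad matches autograd's own MSE gradient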
7 votes, 1 answer
Is there a version of sparse categorical cross entropy in pytorch?
I saw a Sudoku-solver CNN that uses sparse categorical cross-entropy as a loss function in the TensorFlow framework, and I am wondering if there is a similar function for PyTorch. If not, how could I potentially calculate the loss of a 2D array…

Shivam Bhatt · 101 · 1 · 1 · 6
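For the question above: PyTorch's nn.CrossEntropyLoss already behaves like a "sparse" categorical cross-entropy in that it takes integer class indices rather than one-hot targets; the shapes below are an illustrative guess at the Sudoku setup:

    import torch
    import torch.nn as nn

    # (batch, num_classes, cells): 9 candidate digits for each of the 81 cells.
    logits = torch.randn(4, 9, 81)
    targets = torch.randint(0, 9, (4, 81))   # integer class label per cell

    loss = nn.CrossEntropyLoss()(logits, targets)
    print(loss.item())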
7 votes, 1 answer
Tensorflow Keras RMSE metric returns different results than my own built RMSE loss function
This is a regression problem.
My custom RMSE loss:

    def root_mean_squared_error_loss(y_true, y_pred):
        return tf.keras.backend.sqrt(tf.keras.losses.MSE(y_true, y_pred))

Training code sample, where create_model returns a dense fully connected…

ma7555 · 362 · 5 · 17
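A likely source of the discrepancy in the question above, sketched with made-up numbers: Keras's RootMeanSquaredError metric accumulates squared errors across all batches and takes a single square root at the end, whereas a sqrt-of-MSE loss takes the square root per batch and then averages, and sqrt(mean(x)) is generally not mean(sqrt(x)):

    import numpy as np

    batch_sq_err = [np.array([1.0, 9.0]), np.array([0.0, 4.0])]   # squared errors, two batches

    metric_style = np.sqrt(np.mean(np.concatenate(batch_sq_err)))     # one sqrt at the end
    loss_style = np.mean([np.sqrt(b.mean()) for b in batch_sq_err])   # sqrt per batch, then mean

    print(metric_style, loss_style)   # ~1.871 vs ~1.825 -> different values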
7 votes, 2 answers
RMSE loss for multi output regression problem in PyTorch
I'm training a CNN architecture to solve a regression problem using PyTorch where my output is a tensor of 20 values. I planned to use RMSE as my loss function for the model and tried to use PyTorch's nn.MSELoss() and took the square root of it…

cronin · 83 · 1 · 1 · 6
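A minimal sketch for the question above, assuming predictions and targets of shape (batch, 20): take the square root of nn.MSELoss, with a small epsilon so the gradient stays finite when the error is exactly zero:

    import torch
    import torch.nn as nn

    mse = nn.MSELoss()

    def rmse_loss(pred, target, eps=1e-8):
        # Scalar RMSE over all 20 outputs; eps avoids an infinite gradient at zero error.
        return torch.sqrt(mse(pred, target) + eps)

    pred = torch.randn(16, 20, requires_grad=True)
    target = torch.randn(16, 20)
    rmse_loss(pred, target).backward()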
7 votes, 2 answers
How to access sample weights in a Keras custom loss function supplied by a generator?
I have a generator function that infinitely cycles over some directories of images and outputs 3-tuples of batches of the form

    [img1, img2], label, weight

where img1 and img2 are batch_size x M x N x 3 tensors, and label and weight are each batch_size…

ely · 74,674 · 34 · 147 · 228
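A sketch of the usual division of labour here, assuming TF 2.x Keras and a much simpler model than in the question: when the generator yields (inputs, targets, sample_weight), Keras multiplies the per-sample loss values by the weights itself, so the custom loss only needs to return one value per sample rather than a reduced scalar:

    import tensorflow as tf

    def per_sample_mae(y_true, y_pred):
        # One loss value per sample; Keras scales each by its sample weight.
        return tf.reduce_mean(tf.abs(y_true - y_pred), axis=-1)

    inputs = tf.keras.Input(shape=(4,))
    outputs = tf.keras.layers.Dense(1)(inputs)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss=per_sample_mae)

    x = tf.random.normal((8, 4))
    y = tf.random.normal((8, 1))
    w = tf.ones((8,))                      # stands in for the generator's weights
    model.fit(x, y, sample_weight=w, epochs=1, verbose=0)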
7 votes, 1 answer
Higher loss penalty for true non-zero predictions
I am building a deep regression network (CNN) to predict a (1000,1) target vector from images (7,11). The target usually consists of about 90% zeros and only 10% non-zero values. The distribution of (non-)zero values in the targets varies from…

Lukas Hecker · 73 · 1 · 4
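One common way to express the asked-for penalty, sketched for Keras/TF with an arbitrary weight of 10 on the non-zero targets:

    import tensorflow as tf

    def weighted_mse(y_true, y_pred, nonzero_weight=10.0):
        # Errors on non-zero target entries count nonzero_weight times as much.
        is_nonzero = tf.cast(tf.not_equal(y_true, 0.0), y_pred.dtype)
        weights = 1.0 + (nonzero_weight - 1.0) * is_nonzero
        return tf.reduce_mean(weights * tf.square(y_true - y_pred))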
7 votes, 1 answer
Keras custom loss function (elastic net)
I'm trying to code Elastic-Net. It looks like:
And I want to use this loss function in Keras:

    def nn_weather_model():
        ip_weather = Input(shape=(30, 38, 5))
        x_weather = BatchNormalization(name='weather1')(ip_weather)
        x_weather =…

陳建勤 · 127 · 6
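A sketch of one way to express an elastic-net-style loss in Keras (illustrative model and lambda values, not the asker's weather network): ordinary MSE on the predictions plus L1 and L2 penalties on the trainable weights:

    import tensorflow as tf

    l1, l2 = 1e-4, 1e-4   # illustrative regularization strengths

    def make_elastic_net_loss(model):
        def loss(y_true, y_pred):
            mse = tf.reduce_mean(tf.square(y_true - y_pred))
            penalty = tf.add_n([l1 * tf.reduce_sum(tf.abs(w)) +
                                l2 * tf.reduce_sum(tf.square(w))
                                for w in model.trainable_weights])
            return mse + penalty
        return loss

    inputs = tf.keras.Input(shape=(8,))
    outputs = tf.keras.layers.Dense(1)(inputs)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss=make_elastic_net_loss(model))

In many cases the same effect is obtained more simply by giving each layer kernel_regularizer=tf.keras.regularizers.l1_l2(l1=l1, l2=l2).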
7 votes, 2 answers
Why there is sudden drop in loss after every epoch?
I am using a custom loss function (triplet loss) with mini-batches. During an epoch the loss gradually decreases, but just after every epoch there is a sudden drop in the loss (approx. a 10% fall), and then it gradually decreases again during that epoch (ignore…

Rahul Anand · 437 · 1 · 4 · 15
7 votes, 2 answers
Keras apply different weight to different misclassification
I am trying to implement a classification problem with three classes: 'A', 'B' and 'C', where I would like to incorporate a penalty for different types of misclassification in my model's loss function (kind of like weighted cross-entropy). Class weight is…

bambi · 373 · 1 · 3 · 9
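A sketch of a cost-matrix-weighted cross-entropy for the three classes in the question, assuming one-hot y_true and softmax y_pred; cost_matrix[i, j] is the penalty for predicting class j when the true class is i, and the numbers are illustrative:

    import tensorflow as tf

    cost_matrix = tf.constant([[1.0, 2.0, 5.0],    # true 'A'
                               [2.0, 1.0, 2.0],    # true 'B'
                               [5.0, 2.0, 1.0]])   # true 'C'

    def cost_weighted_crossentropy(y_true, y_pred):
        ce = tf.keras.losses.categorical_crossentropy(y_true, y_pred)
        # Expected misclassification cost of the predicted distribution,
        # given the true class selected by the one-hot y_true.
        weights = tf.reduce_sum(tf.matmul(y_true, cost_matrix) * y_pred, axis=-1)
        return ce * weights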
7 votes, 2 answers
Use TensorFlow loss Global Objectives (recall_at_precision_loss) with Keras (not metrics)
Background
I have a multi-label classification problem with 5 labels (e.g. [1 0 1 1 0]). Therefore, I want my model to improve at metrics such as fixed recall, precision-recall AUC or ROC AUC.
It doesn't make sense to use a loss function (e.g.…

NumesSanguis · 5,832 · 6 · 41 · 76
7 votes, 0 answers
Hausdorff distance loss in tensorflow
What is the most efficient way to implement a loss function that minimizes the pairwise Hausdorff distance between two batches of tensors in TensorFlow?

HuckleberryFinn · 1,489 · 2 · 16 · 26
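A sketch of one straightforward (if not maximally efficient) way to do this with plain TensorFlow ops, assuming point sets of shape (batch, n, d) and (batch, m, d); an exact max/min is used here, though a soft approximation is often preferred for smoother gradients:

    import tensorflow as tf

    def hausdorff_distance(a, b):
        # a: (n, d), b: (m, d) -> pairwise Euclidean distances of shape (n, m)
        d = tf.norm(tf.expand_dims(a, 1) - tf.expand_dims(b, 0), axis=-1)
        return tf.maximum(tf.reduce_max(tf.reduce_min(d, axis=1)),
                          tf.reduce_max(tf.reduce_min(d, axis=0)))

    def batch_hausdorff_loss(batch_a, batch_b):
        return tf.reduce_mean(
            tf.map_fn(lambda ab: hausdorff_distance(ab[0], ab[1]),
                      (batch_a, batch_b), fn_output_signature=tf.float32))

    a = tf.random.normal((4, 50, 2))
    b = tf.random.normal((4, 60, 2))
    print(batch_hausdorff_loss(a, b).numpy())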
7 votes, 2 answers
Implementing a batch dependent loss in Keras
I have an autoencoder set up in Keras. I want to be able to weight the features of the input vector according to a predetermined 'precision' vector. This continuous-valued vector has the same length as the input, and each element lies in the range…

duncster94 · 570 · 6 · 23
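One way to make the weighting batch-dependent, sketched for TF 2.x Keras with illustrative layer sizes: feed the precision vector in as a second input and fold it into the reconstruction loss with add_loss:

    import tensorflow as tf

    dim = 32
    x_in = tf.keras.Input(shape=(dim,), name="features")
    precision = tf.keras.Input(shape=(dim,), name="precision")

    encoded = tf.keras.layers.Dense(8, activation="relu")(x_in)
    decoded = tf.keras.layers.Dense(dim)(encoded)

    autoencoder = tf.keras.Model([x_in, precision], decoded)
    # Precision-weighted squared reconstruction error, averaged over the batch.
    autoencoder.add_loss(tf.reduce_mean(precision * tf.square(x_in - decoded)))
    autoencoder.compile(optimizer="adam")   # no separate compiled loss needed

    x = tf.random.normal((64, dim))
    p = tf.random.uniform((64, dim))
    autoencoder.fit([x, p], epochs=1, verbose=0)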
7 votes, 2 answers
How to conditionally assign values to tensor [masking for loss function]?
I want to create an L2 loss function that ignores values (i.e. pixels) where the label has the value 0. The tensor batch[1] contains the labels, while output is a tensor for the net output; both have a shape of (None, 300, 300, 1).

    labels_mask =…

ScientiaEtVeritas · 5,158 · 4 · 41 · 59
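A sketch of the masking idea for the shapes in the question, assuming TensorFlow: multiply the squared error by a binary mask of non-zero label pixels and normalize by the mask sum:

    import tensorflow as tf

    def masked_l2_loss(labels, output):
        # labels, output: (None, 300, 300, 1); pixels with label 0 are ignored.
        mask = tf.cast(tf.not_equal(labels, 0.0), output.dtype)
        squared_error = tf.square(labels - output) * mask
        # Guard against a batch with no labelled pixels at all.
        return tf.reduce_sum(squared_error) / tf.maximum(tf.reduce_sum(mask), 1.0)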
7 votes, 1 answer
Tensorflow - Total Variation Loss - reduce_sum vs reduce_mean?
Why does the Total Variation Loss documentation in TensorFlow suggest using reduce_sum instead of reduce_mean as a loss function?

    This can be used as a loss-function during optimization so as to suppress noise in images. If you have a batch of images, then…

Cypher · 2,374 · 4 · 24 · 36
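For reference, a sketch of the two reductions side by side, assuming TF 2.x: tf.image.total_variation returns one value per image, the documentation's example sums them, and dividing by the element count instead only rescales the objective (which can equally be absorbed into a weighting factor or the learning rate):

    import tensorflow as tf

    images = tf.random.uniform((4, 64, 64, 3))

    tv_per_image = tf.image.total_variation(images)    # shape (4,)
    loss_sum = tf.reduce_sum(tv_per_image)              # as in the docs' example
    loss_mean = loss_sum / tf.cast(tf.size(images), tf.float32)   # per-element rescaling

    print(loss_sum.numpy(), loss_mean.numpy())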