If Y_pred is very far off from Y, the loss value will be very high. However, if the two values are almost identical, the loss value will be very low. Hence we need a loss function that can effectively penalize a model while it is training on a dataset.
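
As a minimal illustration of this behaviour (a sketch in plain NumPy, not tied to any particular framework), consider mean squared error:

import numpy as np

def mse_loss(y_pred, y):
    # The gap between prediction and target is penalized quadratically,
    # so far-off predictions produce a much higher loss value.
    return np.mean((y_pred - y) ** 2)

print(mse_loss(np.array([10.0]), np.array([10.5])))   # close prediction -> 0.25
print(mse_loss(np.array([10.0]), np.array([100.0])))  # far-off prediction -> 8100.0
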
When a neural network is trying to predict a discrete value, we can consider it a classification model. This could be a network trying to predict what kind of animal is present in an image, or whether an email is spam or not.
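
For such classification models a cross-entropy style loss is the usual choice; a minimal NumPy sketch of the binary case (names are illustrative):

import numpy as np

def binary_cross_entropy(y_pred, y):
    # y is the true class (0 or 1); y_pred is the predicted probability of class 1.
    eps = 1e-12  # keep probabilities away from exact 0 and 1 before the log
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y * np.log(y_pred) + (1 - y) * np.log(1 - y_pred))

print(binary_cross_entropy(np.array([0.9]), np.array([1.0])))  # confident and right -> ~0.105
print(binary_cross_entropy(np.array([0.9]), np.array([0.0])))  # confident and wrong -> ~2.303
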
Questions tagged [loss-function]
1727 questions
0 votes, 0 answers
Doing regression with Caffe using the EuclideanLoss layer
I'm using the CaffeNet model. I successfully did a classification task with 9 classes. Then I tried to change it to a regression network by preparing another LMDB file with labels ranging from 700 to 1400. I changed the original training code and…

asked by behnam (11 rep)
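
A frequent stumbling block with EuclideanLoss regression is the raw scale of the targets; a hedged sketch (hypothetical Python preprocessing, before the labels are written to the LMDB) that rescales the 700-1400 range to [0, 1]:

import numpy as np

LABEL_MIN, LABEL_MAX = 700.0, 1400.0  # label range taken from the question

def normalize_label(y):
    # Shrink targets into [0, 1] so the EuclideanLoss and its gradients stay small.
    return (np.asarray(y, dtype=np.float32) - LABEL_MIN) / (LABEL_MAX - LABEL_MIN)

def denormalize_label(y_norm):
    # Map network outputs back to the original 700-1400 range at test time.
    return y_norm * (LABEL_MAX - LABEL_MIN) + LABEL_MIN
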
0 votes, 1 answer
How does xgboost split the root node, and a question about the Taylor expansion
I know xgboost uses Gain = Score(L) + Score(R) - Score(L+R) to split a node, but how does xgboost split the root node? Also, why not use the fourth or fifth derivative of the Taylor expansion of the loss function?

asked by tiezhuetc (41 rep)
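
On the Taylor question: xgboost stops at the second derivative because a second-order expansion already yields a closed-form optimal leaf weight and score. In the notation of the xgboost paper (G and H are the sums of the per-instance gradients g_i and Hessians h_i over a node; lambda and gamma are regularization parameters), the gain behind Score(L) + Score(R) - Score(L+R) is:

\mathrm{Gain} = \frac{1}{2}\left[\frac{G_L^2}{H_L+\lambda} + \frac{G_R^2}{H_R+\lambda} - \frac{(G_L+G_R)^2}{H_L+H_R+\lambda}\right] - \gamma

The root node is split the same way: it is simply the node containing all training instances, and the candidate split with the highest gain wins.
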
0 votes, 0 answers
Keras in_top_k loss
I'm using Keras (TensorFlow backend) to build a model that, given an input, predicts a single class (out of 64 classes), i.e. a multiclass model. Given the pretty large number of classes, I do not want to use the categorical_crossentropy or the…

asked by Daniel Juravski (181 rep)
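
One caveat worth noting: in_top_k is not differentiable, so it can serve as a metric but not as a training loss. A sketch (TensorFlow 2 names, toy stand-in model) pairing a sparse loss with a built-in top-k metric:

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(64)])  # toy stand-in for the real model

model.compile(
    optimizer="adam",
    # A differentiable loss is still needed for training; the sparse variant
    # avoids one-hot encoding the 64 classes.
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    # Top-k membership (built on in_top_k) is tracked as a metric instead.
    metrics=[tf.keras.metrics.SparseTopKCategoricalAccuracy(k=5)],
)
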
0 votes, 0 answers
How to output the loss gradient backpropagation path through a PyTorch computational graph
I have implemented a new loss function in PyTorch.
# model_1 needs to be trained
outputs = model_1(input)
loss = myloss(outputs, labels)
# outputs: how much to resize an image
# labels: the image file index
# Below I explain myloss()…

asked by MSD Paul (1,648 rep)
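
One way to see the backpropagation path is to walk the autograd graph through grad_fn and next_functions; a minimal sketch with toy tensors standing in for the model above:

import torch

x = torch.randn(3, requires_grad=True)
loss = (x * 2).sum()  # stand-in for myloss(model_1(input), labels)

def print_graph(fn, depth=0):
    # Recursively print the chain of backward nodes autograd will traverse.
    if fn is None:
        return
    print("  " * depth + type(fn).__name__)
    for next_fn, _ in fn.next_functions:
        print_graph(next_fn, depth + 1)

print_graph(loss.grad_fn)
# SumBackward0
#   MulBackward0
#     AccumulateGrad
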
0 votes, 0 answers
PyTorch Autoencoder - How to improve loss?
I have a UNET-style autoencoder below, with a filter I wrote in PyTorch at the end. The network seems to converge faster than it should and I don't know why. I have a dataset of 4000 images and I take a 128x128 crop every time. I'm employing…

asked by Bled Clement (170 rep)
0 votes, 0 answers
'builtin_function_or_method' object has no attribute 'size'
optimizer = optim.SGD(model.parameters(), lr=lr)
criterion = nn.MSELoss()
valid_loss_min = np.Inf

def train(model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (img_data32, img_data64, target) in…

asked by Vahid Zarghami (1 rep)
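
This particular message usually means a bound method was passed where a tensor was expected, i.e. a pair of call parentheses was forgotten; a hedged sketch of the pattern (the offending line is truncated in the question, so this is only a guess):

import torch
import torch.nn as nn

criterion = nn.MSELoss()
output = torch.randn(4, 1)
target = torch.randn(4, 1)

# Buggy: `target.float` (without parentheses) is a method object, so MSELoss
# fails with "'builtin_function_or_method' object has no attribute 'size'".
# loss = criterion(output, target.float)

# Fixed: call the method so an actual tensor reaches the loss.
loss = criterion(output, target.float())
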
0 votes, 1 answer
Understanding CNN training results
I would appreciate your explanation of the following :)
I trained a CNN that classifies TWO image classes.
I used the 'SGD' optimizer and the 'categorical_crossentropy' loss function.
My results are as follows:
- training loss = 0.28
- training accuracy =…

asked by Ivan Geek (1 rep)
0 votes, 1 answer
Validation loss decreases dramatically in a CNN - model fit or overfit?
I have a problem with my CNN model.
I have 89 original fundus images: 5 images of the normal class and 84 images of the abnormal class. I then augmented the normal class with OpenCV, so I have 85 images of the normal class and 84 of the abnormal class.
I trained…

asked by hilyap (1 rep)
0 votes, 0 answers
Python Keras - Accuracy drops to zero
I have a problem when training a U-Net, which has many similarities with a CNN, in Keras with TensorFlow. When the training starts, the accuracy increases and the loss steadily goes down. At around epoch 40, in my example, the validation loss…

asked by Jacob (1 rep)
0 votes, 1 answer
How to compile a Keras model that has 2 outputs with a custom loss that takes 3 parameters?
I am trying to compile a model with 2 outputs using a custom loss function, but I am failing to do so.
Any ideas? Let me show you what I have done.
Here is the loss function:
def contrastive_loss(y_true, y_pred1, y_pred2):
    '''Contrastive loss…

asked by mj1261829 (1,200 rep)
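
Keras loss functions receive exactly two arguments, (y_true, y_pred), so a common workaround is to concatenate the two outputs into a single tensor and split it again inside the loss; a sketch with illustrative layer names and shapes:

import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(32,))
out1 = layers.Dense(16)(inp)
out2 = layers.Dense(16)(inp)
merged = layers.Concatenate()([out1, out2])  # both outputs travel as one y_pred
model = Model(inp, merged)

def contrastive_loss(y_true, y_pred, margin=1.0):
    # Split the merged tensor back into the two per-branch predictions.
    y_pred1, y_pred2 = tf.split(y_pred, num_or_size_splits=2, axis=-1)
    d = tf.sqrt(tf.reduce_sum(tf.square(y_pred1 - y_pred2), axis=-1) + 1e-12)
    y_true = tf.squeeze(tf.cast(y_true, tf.float32))
    # Similar pairs (y_true = 1) are pulled together; dissimilar pairs are
    # pushed at least `margin` apart.
    return tf.reduce_mean(y_true * tf.square(d) +
                          (1.0 - y_true) * tf.square(tf.maximum(margin - d, 0.0)))

model.compile(optimizer="adam", loss=contrastive_loss)
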
0 votes, 1 answer
Difference between Logistic Loss and Cross Entropy Loss
I'm confused about logistic loss and cross-entropy loss in the binary classification scenario.
According to Wikipedia (https://en.wikipedia.org/wiki/Loss_functions_for_classification), the logistic loss is defined as V(v) = log2(1 + exp(-v)), where v = y*y_hat.
The cross entropy…

asked by YQ.Wang (1,090 rep)
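
For what it's worth, the two losses differ only in the base of the logarithm. With labels y in {-1, +1}, score y_hat, sigmoid output p = sigma(y_hat), and 0/1 label t = (y+1)/2, a standard identity (not taken from the question) gives:

\mathrm{BCE}(t, p) = -t\ln p - (1-t)\ln(1-p) = \ln\!\left(1 + e^{-y\hat{y}}\right) = \ln 2 \cdot V(y\hat{y})

so the logistic loss as defined on Wikipedia is just binary cross-entropy measured in bits instead of nats.
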
0 votes, 1 answer
PyTorch: Simple feedforward neural network not running without retain_graph=True
Below is my code for training a feedforward neural network (FFNN).
The labels are numbers between 0 and 50. The FFNN comprises a single hidden layer with 50 neurons and an output layer with 51 neurons. Furthermore, I have used negative log…

asked by CS1999 (23 rep)
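
The usual trigger for retain_graph errors in a plain FFNN is reusing a tensor built outside the training loop, whose graph is freed by the first backward(); a minimal sketch of the fix (toy example, not the poster's code):

import torch

w = torch.randn(1, requires_grad=True)

# Buggy pattern: if `base = w * 2` were built once outside the loop, its graph
# would be freed by the first backward() and the second iteration would demand
# retain_graph=True.
for step in range(2):
    base = w * 2                    # rebuild the graph every iteration instead
    loss = (base - 1).pow(2).sum()
    loss.backward()                 # no retain_graph needed now
    w.grad = None                   # clear gradients between steps
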
0 votes, 0 answers
Cross entropy loss returns infinity in neural network
I am implementing a neural network and using a cross-entropy loss function.
The cross-entropy error is given by:
error = -np.sum((actual_Y * np.log2(Y_pred)) + ((1 - actual_Y) * np.log2(1 - Y_pred)))
After a few iterations, (1 - Y_pred) inside…

asked by Irfan Umar (196 rep)
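
The usual guard against the log of zero here is to clip predictions away from 0 and 1 before taking the logarithm; a sketch using the same variable names as the question:

import numpy as np

def cross_entropy(actual_Y, Y_pred, eps=1e-12):
    # Clip so neither np.log2(Y_pred) nor np.log2(1 - Y_pred) can reach -inf.
    Y_pred = np.clip(Y_pred, eps, 1 - eps)
    return -np.sum(actual_Y * np.log2(Y_pred) + (1 - actual_Y) * np.log2(1 - Y_pred))

print(cross_entropy(np.array([1.0, 0.0]), np.array([1.0, 1.0])))  # large but finite
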
0 votes, 1 answer
How important is the loss difference between training and validation data at the beginning of training a neural network?
Short question:
Is the difference between validation and training loss at the beginning of training (the first epochs) a good indicator of the amount of data that should be used?
E.g. would it be a good method to increase the amount of data until…

asked by Khan (1,418 rep)
0 votes, 1 answer
SSD Resnet 50 FPN Loss function clarification
I am using the TensorFlow Object Detection API on my dataset with the ssd-resnet50-fpn model. While training, I see that the classification loss and localization loss have converged, but the total loss is still decreasing. Also, the total loss is not coming out…

asked by Aashish Kumar (21 rep)
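
A likely explanation (not confirmed by the question): the reported total loss also contains a regularization term, so it can keep falling after the other two losses plateau. Schematically:

# Illustrative decomposition only; the numbers are made up.
classification_loss = 0.41   # converged
localization_loss = 0.22     # converged
regularization_loss = 0.15   # e.g. L2 weight decay, still shrinking

total_loss = classification_loss + localization_loss + regularization_loss
print(total_loss)  # keeps decreasing as long as regularization_loss does
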