
I am new to the Deep Learning field, and I am using the log-likelihood method to compare MSE metrics. Could anyone show how to calculate the MSE for the following 2 predicted output examples, each with 3 output neurons? Thanks

yt = [[1, 0, 0], [0, 0, 1]]

yp = [[0.9, 0.2, 0.2], [0.2, 0.8, 0.3]]

Kev

1 Answer


MSE, or Mean Squared Error, is simply the expected value of the squared difference between the predicted and the ground truth labels, represented as

\text{MSE}(\hat{\theta}) = E\left[(\hat{\theta} - \theta)^2\right]

where $\theta$ is the ground truth label and $\hat{\theta}$ is the predicted label.

I am not sure what you are referring to exactly, whether a theoretical question or a part of code.

As a Python implementation:

import numpy as np

def mean_squared_error(A, B):
    # mean of the element-wise squared differences
    return np.square(np.subtract(A, B)).mean()

yt = [[1, 0, 0], [0, 0, 1]]
yp = [[0.9, 0.2, 0.2], [0.2, 0.8, 0.3]]

mse = mean_squared_error(yt, yp)
print(mse)

This will give a value of 0.21
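To verify by hand, plug the example values into the definition above. The squared errors are

(1-0.9)^2 + (0-0.2)^2 + (0-0.2)^2 = 0.01 + 0.04 + 0.04

(0-0.2)^2 + (0-0.8)^2 + (1-0.3)^2 = 0.04 + 0.64 + 0.49

and their mean over all 6 output neurons is 1.26 / 6 = 0.21.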

If you are using one of the DL frameworks such as TensorFlow, it already provides a function that calculates the MSE loss between tensors:

tf.losses.mean_squared_error

where

tf.losses.mean_squared_error(
    labels,
    predictions,
    weights=1.0,
    scope=None,
    loss_collection=tf.GraphKeys.LOSSES,
    reduction=Reduction.SUM_BY_NONZERO_WEIGHTS
)

Args:

labels: The ground truth output tensor, same dimensions as 'predictions'.

predictions: The predicted outputs.

weights: Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding losses dimension).

scope: The scope for the operations performed in computing the loss.

loss_collection: collection to which the loss will be added.

reduction: Type of reduction to apply to loss.

Returns:

Weighted loss float Tensor. If reduction is NONE, this has the same shape as labels; otherwise, it is scalar.
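As a rough NumPy sketch (an illustration, not TensorFlow's actual implementation) of what the default SUM_BY_NONZERO_WEIGHTS reduction does: it sums the weighted squared errors and divides by the number of elements that received a non-zero weight. With the default weights=1.0 this reduces to the plain mean. The helper name below is hypothetical.

```python
import numpy as np

def mse_sum_by_nonzero_weights(labels, predictions, weights=1.0):
    # Hypothetical helper mimicking the SUM_BY_NONZERO_WEIGHTS reduction:
    # weighted squared errors, summed, then divided by the count of
    # elements whose (broadcast) weight is non-zero.
    labels = np.asarray(labels, dtype=float)
    predictions = np.asarray(predictions, dtype=float)
    weights = np.broadcast_to(np.asarray(weights, dtype=float), labels.shape)
    losses = weights * np.square(predictions - labels)
    num_nonzero = np.count_nonzero(weights)
    return losses.sum() / num_nonzero if num_nonzero else 0.0

yt = [[1, 0, 0], [0, 0, 1]]
yp = [[0.9, 0.2, 0.2], [0.2, 0.8, 0.3]]

print(mse_sum_by_nonzero_weights(yt, yp))  # approximately 0.21, same as the plain mean
```

With a non-trivial weights tensor, elements with weight 0 are excluded from both the numerator and the denominator, which is how the framework ignores masked-out outputs.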

Mostafa Hussein