I have been reading some convolutional neural network training code, and I do not understand the following part of it:
loss = tf.reduce_sum(tf.nn.l2_loss(tf.subtract(train_output, train_gt)))
for w in weights:
    loss += tf.nn.l2_loss(w) * 1e-4
The first line is understandable: it compares the network output with the label and sums the squares of the differences, and that is the definition of the loss. But I do not understand the loop that follows, for w in weights:
Here weights is a list of 10 weight tensors and 10 bias tensors, so len(weights) is 20 (10 weights + 10 biases). But why does this code compute the square of each w, multiply it by 1e-4, and add it to the loss?
Is this necessary for training?
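To check my understanding of what these two lines compute, here is a minimal NumPy sketch. The arrays train_output, train_gt, and weights are made-up placeholder values, not the real tensors from the network; the only TensorFlow fact I rely on is that tf.nn.l2_loss(t) returns sum(t ** 2) / 2.

```python
import numpy as np

# Placeholder values standing in for the network output and the label.
train_output = np.array([1.0, 2.0, 3.0])
train_gt = np.array([1.5, 1.0, 3.0])

# tf.nn.l2_loss(t) computes sum(t ** 2) / 2, so the first line is half
# the sum of squared differences between prediction and ground truth.
diff = train_output - train_gt
data_loss = np.sum(diff ** 2) / 2.0  # 0.625 for these values

# The loop then adds sum(w ** 2) / 2 * 1e-4 for every tensor in weights
# (here two small made-up tensors instead of the real 20).
weights = [np.array([0.1, -0.2]), np.array([0.3])]
loss = data_loss
for w in weights:
    loss += np.sum(w ** 2) / 2.0 * 1e-4

print(data_loss, loss)
```

So the total loss is the data-fitting term plus 1e-4 times half the sum of all squared weight entries, which is what I am asking about.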