I am trying to reproduce the EuclideanLoss from Caffe in TensorFlow. I found a function called tf.nn.l2_loss, which according to the documentation computes the following:
output = sum(t ** 2) / 2
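To make sure I read that formula right, here is a minimal sketch with made-up values (TF 1.x session style):

import tensorflow as tf

# Made-up values purely to check the documented formula:
# sum(t ** 2) / 2 = (1 + 4 + 9) / 2 = 7.0
t = tf.constant([1.0, 2.0, 3.0])
loss = tf.nn.l2_loss(t)

with tf.Session() as sess:
    print(sess.run(loss))  # prints 7.0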
Looking at the EuclideanLoss layer in the Python version of Caffe, the forward pass is:
def forward(self, bottom, top):
    self.diff[...] = bottom[0].data - bottom[1].data
    top[0].data[...] = np.sum(self.diff**2) / bottom[0].num / 2.
In the original documentation it says the layer computes

E = 1/(2N) * sum_{n=1}^{N} || x_n^1 - x_n^2 ||_2^2
To me this is exactly the same computation. However, my loss values for the same net are around 3000 in TensorFlow and roughly 300 in Caffe. So where does the difference come from?
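To make the comparison concrete, here is a small NumPy sketch (made-up shapes and variable names of my own choosing) that evaluates both quoted formulas on the same data; bottom[0].num in the Caffe snippet is the batch size N:

import numpy as np

np.random.seed(0)
N = 10                                    # made-up batch size
pred = np.random.randn(N, 5)              # stands in for bottom[0].data
target = np.random.randn(N, 5)            # stands in for bottom[1].data
diff = pred - target

tf_style = np.sum(diff ** 2) / 2.         # what tf.nn.l2_loss(pred - target) computes
caffe_style = np.sum(diff ** 2) / N / 2.  # what the quoted Caffe forward() computes

print(tf_style, caffe_style)              # the two numbers differ by a factor of N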