I made a small model using Keras on Google Colaboratory, and I see wrong metric values when I run training on a TPU.
When I train on CPU/GPU, the m1 and m2 metrics show the correct numbers, of course (see the code below).
But after I change the runtime type to TPU, m1 and m2 are no longer correct, and both look like the average of the two values.
import tensorflow as tf
from tensorflow.keras import backend as K

def m1(y_true, y_pred):
    return K.constant(10)

def m2(y_true, y_pred):
    return K.constant(20)

model = AnyModel()  # placeholder for any simple Keras model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=[m1, m2])
model.fit(...)
[result of CPU/GPU]
[=====>....] - ETA: 0s - loss: xxxxx - m1: 10.0000 - m2: 20.0000
[result of TPU]
[=====>....] - ETA: 0s - loss: xxxxx - m1: 14.9989 - m2: 15.0000
It is obvious that the CPU/GPU result is correct. Why does this happen on TPU? Is there any workaround?
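The TPU numbers look like the arithmetic mean of the two constants, which is my own hypothesis rather than anything documented; a quick check of that arithmetic:

    # m1 and m2 return these constants
    m1_value, m2_value = 10.0, 20.0

    # Mean of the two metrics -- close to the ~15.0 the TPU run reports for both
    mean_of_metrics = (m1_value + m2_value) / 2
    print(mean_of_metrics)  # 15.0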
- If I use only one metric (e.g. metrics=[m1]), the value is correct.
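Based on that observation, my only workaround so far is to compile with a single custom metric at a time. A minimal self-contained sketch (the Dense model, input shape, and dummy data here are hypothetical stand-ins for AnyModel() and my real data; I use tf.constant instead of K.constant, which behaves the same for this purpose):

    import numpy as np
    import tensorflow as tf

    # Same constant metric as in my example above
    def m1(y_true, y_pred):
        return tf.constant(10.0)

    # Hypothetical stand-in for AnyModel(): a single softmax layer
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(3, activation='softmax'),
    ])

    # Compiling with only one custom metric avoids the mixing I observed on TPU
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=[m1])

    # Dummy data just to make the sketch runnable
    x = np.random.rand(8, 4).astype('float32')
    y = tf.keras.utils.to_categorical(np.random.randint(0, 3, size=8), 3)
    history = model.fit(x, y, epochs=1, verbose=0)
    print(history.history['m1'][0])  # 10.0 as expected

This is obviously not a real fix, since I want both metrics reported during a single training run.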