
I made a small model using Keras on Google Colaboratory, and I see wrong metric values when I run training on a TPU.

When I run training on CPU/GPU, the m1 and m2 metrics show the correct numbers, as expected (see the code below).

But after I change the runtime type to TPU, m1 and m2 are not correct, and both look like the average of the two values.

from keras import backend as K

# Constant metrics, used only to check what the training loop reports.
def m1(y_true, y_pred):
    return K.constant(10)

def m2(y_true, y_pred):
    return K.constant(20)

model = AnyModel()  # any Keras model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=[m1, m2])
model.fit(...)

[result of CPU/GPU]

[=====>....] - ETA: 0s - loss: xxxxx - m1: 10.0000 - m2: 20.0000 

[result of TPU]

[=====>....] - ETA: 0s - loss: xxxxx - m1: 14.9989 - m2: 15.0000 

It is obvious that the CPU/GPU result is correct. Why does this happen? Is there any workaround?

  • If I use only one metric (like metrics=[m1]), the value is correct.
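For context, the TPU numbers are consistent with the per-metric results being averaged across the metric list; a quick arithmetic check (plain Python, no TPU needed):

```python
# Expected per-batch values of the two constant metrics above.
m1_value = 10.0
m2_value = 20.0

# If the runtime averages results across the metric list, both reported
# metrics collapse toward the cross-metric mean.
cross_metric_mean = (m1_value + m2_value) / 2
print(cross_metric_mean)  # 15.0, matching the ~15 reported for both m1 and m2
```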
Bob Smith
mercy387

1 Answer


Now it works!

Multiple metrics work correctly with TensorFlow version 1.14.0-rc1. I guess it was a bug in TF or Keras, but it has now been fixed.

(Note: on version 1.14.0-rc1, fit_generator cannot be used! But that should be fixed soon.)

If you have to use TensorFlow 1.13 or earlier for some reason, be aware of this bug and use just one metric.
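The single-metric workaround can be illustrated without TPU hardware: evaluate each metric in its own pass, so there is nothing to average across. Below, m1/m2 are plain-Python stand-ins for the Keras metric functions, and evaluate_with_single_metric is a hypothetical helper, not a Keras API:

```python
# Plain-Python stand-ins for the Keras metrics in the question.
def m1(y_true, y_pred):
    return 10.0  # stand-in for K.constant(10)

def m2(y_true, y_pred):
    return 20.0  # stand-in for K.constant(20)

def evaluate_with_single_metric(metric, y_true=None, y_pred=None):
    # With only one metric in the list, there is no second value
    # to average with, so the reported number stays correct.
    return metric(y_true, y_pred)

print(evaluate_with_single_metric(m1))  # 10.0
print(evaluate_with_single_metric(m2))  # 20.0
```

In real Keras terms this corresponds to compiling with metrics=[m1] (and, if you need both numbers, compiling and evaluating a second time with metrics=[m2]).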

mercy387