This is TF 2.3.0. During training, the reported values for SparseCategoricalCrossentropy loss and sparse_categorical_accuracy seemed way off. I looked through my code but couldn't spot any errors. Here's the code to reproduce:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
x = np.random.randint(0, 255, size=(64, 224, 224, 3)).astype('float32')
y = np.random.randint(0, 3, (64, 1)).astype('int32')
ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(32)
def create_model():
    input_layer = tf.keras.layers.Input(shape=(224, 224, 3), name='img_input')
    # scale pixels from [0, 255] to [0, 1] inside the model
    x = tf.keras.layers.experimental.preprocessing.Rescaling(1./255, name='rescale_1_over_255')(input_layer)
    base_model = tf.keras.applications.ResNet50(input_tensor=x, weights='imagenet', include_top=False)
    x = tf.keras.layers.GlobalAveragePooling2D(name='global_avg_pool_2d')(base_model.output)
    output = Dense(3, activation='softmax', name='predictions')(x)
    return tf.keras.models.Model(inputs=input_layer, outputs=output)
model = create_model()
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    metrics=['sparse_categorical_accuracy']
)
model.fit(ds, steps_per_epoch=2, epochs=5)
This is what printed:
Epoch 1/5
2/2 [==============================] - 0s 91ms/step - loss: 1.5160 - sparse_categorical_accuracy: 0.2969
Epoch 2/5
2/2 [==============================] - 0s 85ms/step - loss: 0.0892 - sparse_categorical_accuracy: 1.0000
Epoch 3/5
2/2 [==============================] - 0s 84ms/step - loss: 0.0230 - sparse_categorical_accuracy: 1.0000
Epoch 4/5
2/2 [==============================] - 0s 82ms/step - loss: 0.0109 - sparse_categorical_accuracy: 1.0000
Epoch 5/5
2/2 [==============================] - 0s 82ms/step - loss: 0.0065 - sparse_categorical_accuracy: 1.0000
But if I double-check with model.evaluate, and by "manually" computing the accuracy:
model.evaluate(ds)
2/2 [==============================] - 0s 25ms/step - loss: 1.2681 - sparse_categorical_accuracy: 0.2188
[1.268101453781128, 0.21875]
y_pred = model.predict(ds)
y_pred = np.argmax(y_pred, axis=-1)
y_pred = y_pred.reshape(-1, 1)
np.sum(y == y_pred)/len(y)
0.21875
The result from model.evaluate(...) agrees with the "manual" check on the metrics. But the loss/metrics reported during training look way off. It is rather hard to see what's wrong, since no error or exception is ever thrown.
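To make the per-epoch gap visible, one option (a minimal sketch; EvalAfterEpoch is just a name I made up, and it reuses ds and model from above) is a callback that re-runs model.evaluate in inference mode at the end of every epoch, next to what fit reports:

class EvalAfterEpoch(tf.keras.callbacks.Callback):
    # diagnostic only: re-evaluate on the same dataset in inference mode
    def on_epoch_end(self, epoch, logs=None):
        loss, acc = self.model.evaluate(ds, verbose=0)
        print(f'epoch {epoch}: eval loss={loss:.4f}, eval acc={acc:.4f} | '
              f'fit reported loss={logs["loss"]:.4f}, acc={logs["sparse_categorical_accuracy"]:.4f}')

model.fit(ds, steps_per_epoch=2, epochs=5, callbacks=[EvalAfterEpoch()])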
Additionally, I created a very simple case to try to reproduce this, but here it actually is not reproducible. Note that batch_size equals the length of the data, so this isn't mini-batch GD but full-batch GD (to eliminate confusion with mini-batch loss/metrics):
x = np.random.randn(1024, 1).astype('float32')
y = np.random.randint(0, 3, (1024, 1)).astype('int32')
ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(1024)
model = Sequential()
model.add(Dense(3, activation='softmax'))
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    metrics=['sparse_categorical_accuracy']
)
model.fit(ds, epochs=5)
model.evaluate(ds)
As mentioned in my comment, one suspect is the batch normalization layers, which the simple case that can't reproduce the issue doesn't have.
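One way to probe that suspicion (a rough sketch; assume model, x, and y have been rebuilt as in the first ResNet50 example, since the simple case above reassigned them) is to confirm the model really contains BatchNormalization layers and then run the same forward pass with training=True vs training=False, which is exactly where batch norm diverges (batch statistics vs moving averages):

bn_layers = [l for l in model.layers
             if isinstance(l, tf.keras.layers.BatchNormalization)]
print(f'{len(bn_layers)} BatchNormalization layers found')

# same weights, same inputs; only the training flag differs
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
loss_train = loss_fn(y, model(x, training=True))
loss_infer = loss_fn(y, model(x, training=False))
print(f'loss with training=True:  {loss_train.numpy():.4f}')
print(f'loss with training=False: {loss_infer.numpy():.4f}')

If batch norm is the culprit, the training-mode loss should be in line with what fit prints, while the inference-mode loss should match model.evaluate.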