This can happen in more than one way: the number of checkpoints kept is limited by `max_to_keep` on the `CheckpointManager`, and the `ModelCheckpoint` callback you applied to `fit` saves checkpoints as well.
Input:
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Training
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
history = model.fit(batched_features, epochs=1000,
                    validation_data=batched_features,
                    callbacks=[cp_callback])
checkpoint = tf.train.Checkpoint(model)
manager = tf.train.CheckpointManager(checkpoint, checkpoint_dir, max_to_keep=3)
# restore() expects a checkpoint prefix, not a directory:
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
Output:
Epoch 6/1000
1/1 [==============================] - ETA: 0s - loss: 1.6910e-05 - accuracy: 1.0000
Epoch 6: val_loss improved from 0.00002 to 0.00000, saving model to F:\models\checkpoint\test_checkpoint_restore_01
Epoch 628/1000
1/1 [==============================] - ETA: 0s - loss: 0.0000e+00 - accuracy: 0.0000e+00
Epoch 628: val_loss did not improve from 0.00000
1/1 [==============================] - 0s 160ms/step - loss: 0.0000e+00 - accuracy: 0.0000e+00 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 629/1000
1/1 [==============================] - ETA: 0s - loss: 0.0000e+00 - accuracy: 0.0000e+00
Epoch 629: val_loss did not improve from 0.00000
1/1 [==============================] - 0s 169ms/step - loss: 0.0000e+00 - accuracy: 0.0000e+00 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 630/1000
1/1 [==============================] - ETA: 0s - loss: 0.0000e+00 - accuracy: 0.0000e+00
Epoch 630: val_loss did not improve from 0.00000
1/1 [==============================] - 0s 162ms/step - loss: 0.0000e+00 - accuracy: 0.0000e+00 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 631/1000
1/1 [==============================] - ETA: 0s - loss: 0.0000e+00 - accuracy: 0.0000e+00
Epoch 631: val_loss did not improve from 0.00000
1/1 [==============================] - 0s 156ms/step - loss: 0.0000e+00 - accuracy: 0.0000e+00 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Target directory
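A minimal sketch of the manager-based save/restore flow above. The model and directory here are throwaway placeholders (not the original `batched_features` pipeline or the `F:\models\checkpoint` path); the point is that `max_to_keep=3` prunes old checkpoints, and `restore()` takes `manager.latest_checkpoint` (a prefix, not a directory):

```python
import tempfile

import tensorflow as tf

# Stand-in directory and model purely for illustration.
checkpoint_dir = tempfile.mkdtemp()
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model(tf.zeros([1, 1]))  # build the variables so there is something to save

checkpoint = tf.train.Checkpoint(model=model)
# max_to_keep=3 means only the three most recent checkpoints stay on disk.
manager = tf.train.CheckpointManager(checkpoint, checkpoint_dir, max_to_keep=3)

# manager.latest_checkpoint is None on the first run, which restore() accepts
# as a no-op; on later runs it points at the newest saved prefix.
checkpoint.restore(manager.latest_checkpoint)

# Save five times; the manager deletes everything beyond the last three.
for _ in range(5):
    manager.save()

print(len(manager.checkpoints))  # 3
```

On a second run against the same directory, `checkpoint.restore(manager.latest_checkpoint)` would reload the weights instead of starting from scratch.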