
I'm using TensorFlow 2.4 and I'm new to TensorFlow.

Here's the code:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense
from tensorflow.keras.callbacks import ModelCheckpoint

model = Sequential()
model.add(LSTM(32, input_shape=X_train.shape[1:]))
model.add(Dropout(0.2))
model.add(Dense(1, activation='linear'))

model.compile(optimizer='rmsprop', loss='mean_absolute_error', metrics=['mae'])
model.summary()

save_weights_at = 'basic_lstm_model'
save_best = ModelCheckpoint(save_weights_at, monitor='val_loss', verbose=0,
                        save_best_only=True, save_weights_only=False, mode='min',
                        period=1)
history = model.fit(x=X_train, y=y_train, batch_size=16, epochs=20,
         verbose=1, callbacks=[save_best], validation_data=(X_val, y_val),
         shuffle=True)

And in some epochs I got a warning about untraced functions when the model was saved (the original post showed it as a screenshot).

Do you know why I got this warning?

Cherry Wu
  • Have the same issue, also using LSTM layers. Did you get it solved? – cyclux Feb 08 '21 at 23:49
  • not yet~ didn't impact output in my case though... – Cherry Wu Feb 09 '21 at 06:35
  • 2
    @Cherry Wu, I tried executing without `ModelCheckpoint` and it is not showing any warning. It seems this is an open issue in `TF 2.4` and it can be tracked [Saving model in TF 2.4](https://github.com/tensorflow/tensorflow/issues/47479). Thanks! –  Mar 12 '21 at 13:02
  • Thank you for letting me know @TFer2! Yeah, in my case have to use `ModelCheckpoint` – Cherry Wu Mar 12 '21 at 17:01

6 Answers


I think this warning can be safely ignored, as you can find the same warning even in a tutorial given by TensorFlow. I often see it when saving custom models such as graph NNs. You should be good to go as long as you don't need to access those untraced functions.

However, if you're annoyed by this big chunk of text, you can suppress this warning by adding the following at the top of the code.

import absl.logging
absl.logging.set_verbosity(absl.logging.ERROR)
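If you'd rather not depend on `absl` directly, a similar effect can be sketched with the standard `logging` module. This assumes the messages are routed through Python's `"tensorflow"` and `"absl"` loggers, which is how TF's logging is normally wired:

```python
import logging

# Raise the threshold on the loggers that emit these messages so that
# WARNING-level output is suppressed but errors still get through.
for name in ("tensorflow", "absl"):
    logging.getLogger(name).setLevel(logging.ERROR)
```

As with the `absl` snippet above, put this at the top of the script, before training starts.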
Achintha Ihalage
  • what if you do want to access those fns? – Alex Kreimer Feb 17 '22 at 10:18
  • 3
    What actually means "access" to these functions? does it mean being able to arbitrarily call them from anywhere, or that the saved network uses it and therefore won't be able to run at all? – Patafikss Jul 22 '22 at 12:26
  • 1
    This answer does not contain a solution to the problem. The warning may mean that the saved model cannot be reloaded in the future like @FlorianLalande explained - his answer should be upvoted. – mac13k Oct 16 '22 at 19:33
  • 1
    @mac13k His answer is not providing a solution either. – Scholar Nov 28 '22 at 14:03
  • @Scholar- fair point, but at least the explanation is quite thorough. – mac13k Dec 01 '22 at 11:37

If you ignore this warning, you will not be able to reload your model.

What this warning is telling you is that you are using customized layers and losses in your model architecture.

The ModelCheckpoint callback saves your model after each epoch that achieves a lower validation loss. Models can be saved in HDF5 format or SavedModel format (the default, specific to TensorFlow and Keras). You are using the SavedModel format here, since you didn't explicitly specify the .h5 extension.

Every time a new epoch reaches a lower validation loss, your model is automatically saved, but your customized objects (layers and losses) are not traced. Incidentally, this is why the warning is only prompted after several training epochs.

Without your traced customized objects, you will not be able to reload your model successfully with keras.models.load_model().

If you do not intend to reload your best model in the future, you can safely ignore this warning. In any case, you can still use your best model in your current local environment after training.
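For intuition, the format selection described above boils down to the checkpoint filepath's extension. Here is a simplified sketch of the rule (a hypothetical helper, not the actual Keras source; in TF 2.4, `.h5`, `.hdf5`, and `.keras` extensions select HDF5, anything else selects SavedModel):

```python
def inferred_save_format(filepath, save_format=None):
    """Simplified sketch of how tf.keras picks the on-disk format
    for model.save() and the ModelCheckpoint callback."""
    if save_format is not None:
        return save_format          # an explicit argument always wins
    if filepath.endswith(('.h5', '.hdf5', '.keras')):
        return 'h5'                 # single HDF5 file
    return 'tf'                     # SavedModel directory

# The question's checkpoint path has no extension, so it is saved
# as a SavedModel -- the variant that triggers the tracing warning.
print(inferred_save_format('basic_lstm_model'))      # -> tf
print(inferred_save_format('basic_lstm_model.h5'))   # -> h5
```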

Florian Lalande

Saving models in H5 format seems to work for me:

model.save(filepath, save_format="h5")

Here is how to use H5 with model checkpointing (I've not tested this extensively, caveat emptor!)

import logging
import six
from tensorflow.keras.callbacks import ModelCheckpoint

class ModelCheckpointH5(ModelCheckpoint):
    # There is a bug saving models in TF 2.4
    # https://github.com/tensorflow/tensorflow/issues/47479
    # This forces the h5 format for saving
    def __init__(self,
                 filepath,
                 monitor='val_loss',
                 verbose=0,
                 save_best_only=False,
                 save_weights_only=False,
                 mode='auto',
                 save_freq='epoch',
                 options=None,
                 **kwargs):
        # Forward the caller's arguments instead of hard-coding the
        # defaults, so monitor/verbose/save_best_only etc. take effect.
        super(ModelCheckpointH5, self).__init__(filepath,
                                                monitor=monitor,
                                                verbose=verbose,
                                                save_best_only=save_best_only,
                                                save_weights_only=save_weights_only,
                                                mode=mode,
                                                save_freq=save_freq,
                                                options=options,
                                                **kwargs)
    def _save_model(self, epoch, logs):
        from tensorflow.python.keras.utils import tf_utils
   
        logs = logs or {}

        if isinstance(self.save_freq,
                      int) or self.epochs_since_last_save >= self.period:
          # Block only when saving interval is reached.
          logs = tf_utils.to_numpy_or_python_type(logs)
          self.epochs_since_last_save = 0
          filepath = self._get_file_path(epoch, logs)

          try:
            if self.save_best_only:
              current = logs.get(self.monitor)
              if current is None:
                logging.warning('Can save best model only with %s available, '
                                'skipping.', self.monitor)
              else:
                if self.monitor_op(current, self.best):
                  if self.verbose > 0:
                    print('\nEpoch %05d: %s improved from %0.5f to %0.5f,'
                          ' saving model to %s' % (epoch + 1, self.monitor,
                                                   self.best, current, filepath))
                  self.best = current
                  if self.save_weights_only:
                    self.model.save_weights(
                        filepath, overwrite=True, options=self._options)
                  else:
                    self.model.save(filepath, overwrite=True, options=self._options,save_format="h5") # NK edited here
                else:
                  if self.verbose > 0:
                    print('\nEpoch %05d: %s did not improve from %0.5f' %
                          (epoch + 1, self.monitor, self.best))
            else:
              if self.verbose > 0:
                print('\nEpoch %05d: saving model to %s' % (epoch + 1, filepath))
              if self.save_weights_only:
                self.model.save_weights(
                    filepath, overwrite=True, options=self._options)
              else:
                self.model.save(filepath, overwrite=True, options=self._options,save_format="h5") # NK edited here

            self._maybe_remove_file()
          except IOError as e:
            # `e.errno` appears to be `None` so checking the content of `e.args[0]`.
            if 'is a directory' in six.ensure_str(e.args[0]).lower():
              raise IOError('Please specify a non-directory filepath for '
                            'ModelCheckpoint. Filepath used is an existing '
                            'directory: {}'.format(filepath))
            # Re-throw the error for any other causes.
            raise 
Noel Kennedy

Try appending the .h5 extension to the filename. Change:

save_weights_at = 'basic_lstm_model'

to:

save_weights_at = 'basic_lstm_model.h5'
jcollado

I think @Florian Lalande is right and this warning should not be ignored. Here is my solution, though I'm not sure it works in every case.

tf.saved_model.save(model, save_model_path)  # save in SavedModel format
test_model = tf.keras.models.load_model(save_model_path, custom_objects={"TFBertModel": transformers.TFBertModel})  # load the model, passing in the custom objects

The idea is to use tf.keras.models.load_model with custom_objects.


Save the model in the .tf (SavedModel) format if there is an issue saving the model:

model.save('your_model_name.tf')
tikendraw