
I found there are different ways to save/restore models and variables in TensorFlow, including tf.train.Saver, tf.train.Checkpoint, and tf.saved_model.

In TensorFlow's documentation, I found some differences between them:

  1. tf.saved_model is a thin wrapper around tf.train.Saver.
  2. tf.train.Checkpoint supports eager execution, but tf.train.Saver does not.
  3. tf.train.Checkpoint does not create a .meta file, yet it can still restore the model structure (this is the big question: how can it do that?).

How can tf.train.Checkpoint load a graph without a .meta file? Or, more generally, what is the difference between tf.train.Saver and tf.train.Checkpoint?
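For context, the name-based workflow I am comparing against looks roughly like this (a sketch in TF 1.x style; the variable name and paths are just placeholders):

    import os
    import tensorflow as tf

    os.makedirs('/tmp/tf1_ckpt', exist_ok=True)

    # Name-based checkpointing with tf.train.Saver (TF 1.x style).
    # saver.save() writes model.meta (the serialized graph) alongside the
    # .index and .data files; variables are keyed by their names.
    g = tf.Graph()
    with g.as_default():
        v = tf.compat.v1.get_variable('my_var', shape=[2],
                                      initializer=tf.compat.v1.zeros_initializer())
        saver = tf.compat.v1.train.Saver()  # matches variables by v.name ('my_var:0')
        with tf.compat.v1.Session(graph=g) as sess:
            sess.run(tf.compat.v1.global_variables_initializer())
            saver.save(sess, '/tmp/tf1_ckpt/model')  # writes model.meta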

Amir

1 Answer


According to the TensorFlow docs:

Checkpoint.save and Checkpoint.restore write and read object-based checkpoints, in contrast to tf.train.Saver which writes and reads variable.name based checkpoints. Object-based checkpointing saves a graph of dependencies between Python objects (Layers, Optimizers, Variables, etc.) with named edges, and this graph is used to match variables when restoring a checkpoint. It can be more robust to changes in the Python program, and helps to support restore-on-create for variables when executing eagerly. Prefer tf.train.Checkpoint over tf.train.Saver for new code.
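Here is a minimal sketch of what that means in practice (assumes TF 2.x with eager execution; the edge names step and weights and the paths are arbitrary):

    import os
    import tensorflow as tf

    os.makedirs('/tmp/tf2_ckpt', exist_ok=True)

    # Object-based checkpointing: the checkpoint records named edges
    # ('step', 'weights') in an object graph, not variable.name strings,
    # and writes no .meta file because no GraphDef is serialized.
    step = tf.Variable(0, dtype=tf.int64)
    weights = tf.Variable(tf.ones([2, 2]))
    ckpt = tf.train.Checkpoint(step=step, weights=weights)

    step.assign_add(1)
    save_path = ckpt.save('/tmp/tf2_ckpt/demo')  # .index/.data only, no .meta

    # Restoring: rebuild the same object structure in Python and attach it
    # to a new Checkpoint; values are matched by their position in the
    # object graph, so the Python variable names below do not matter.
    step2 = tf.Variable(0, dtype=tf.int64)
    weights2 = tf.Variable(tf.zeros([2, 2]))
    ckpt2 = tf.train.Checkpoint(step=step2, weights=weights2)
    ckpt2.restore(save_path).assert_consumed()
    print(step2.numpy())  # 1

In other words, tf.train.Checkpoint never needs a .meta file because it does not serialize the graph at all: you recreate the objects in Python code, and restore() matches the saved values to them through the object graph's named edges.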

Amir