
In the TensorFlow Functional API guide, there is an example where multiple models are created from the same graph of layers (https://www.tensorflow.org/beta/guide/keras/functional#using_the_same_graph_of_layers_to_define_multiple_models):

from tensorflow import keras
from tensorflow.keras import layers

encoder_input = keras.Input(shape=(28, 28, 1), name='img')
x = layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = layers.GlobalMaxPooling2D()(x)

encoder = keras.Model(encoder_input, encoder_output, name='encoder')
encoder.summary()

x = layers.Reshape((4, 4, 1))(encoder_output)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
x = layers.Conv2DTranspose(32, 3, activation='relu')(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation='relu')(x)

autoencoder = keras.Model(encoder_input, decoder_output, name='autoencoder')
autoencoder.summary()

Is it possible to save and load these two models so that they still share the same graph? If I save and load them like this:

# Save
encoder.save('encoder.h5')
autoencoder.save('autoencoder.h5')

# Load
new_encoder = keras.models.load_model('encoder.h5')
new_autoencoder = keras.models.load_model('autoencoder.h5')

the new encoder and autoencoder will no longer share the same graph, and therefore will no longer train together.
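The loss of sharing is easy to check directly. Here is a minimal sketch (assuming TensorFlow 2.x, with a tiny dense graph standing in for the encoder/autoencoder above; the layer and file names are illustrative):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Two models built over one shared graph of layers.
inp = keras.Input(shape=(8,), name='inp')
h = layers.Dense(4, name='hidden')(inp)
out = layers.Dense(8, name='out')(h)
small = keras.Model(inp, h, name='small')
big = keras.Model(inp, out, name='big')

# Before saving, both models reference the very same layer object.
assert small.get_layer('hidden') is big.get_layer('hidden')

small.save('small.h5')
big.save('big.h5')
new_small = keras.models.load_model('small.h5')
new_big = keras.models.load_model('big.h5')

# After loading, each model gets its own independent copy of the layer.
assert new_small.get_layer('hidden') is not new_big.get_layer('hidden')
```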

mpotma

1 Answer


That is a cool question. The encoder and autoencoder no longer share the same graph because they are saved as two disjoint models. In fact, the encoder is saved twice, since it is also embedded in the autoencoder.

To restore both models while still sharing the same graph, I would suggest the following approach:

  1. Name the encoder's output layer. For example:

    encoder_output = layers.GlobalMaxPooling2D(name='encoder_output')(x)
    
  2. Save only the autoencoder:

    autoencoder.save('autoencoder.h5')
    
  3. Restore the autoencoder:

    new_autoencoder = keras.models.load_model('autoencoder.h5')
    
  4. Reconstruct the encoder's graph from the restored autoencoder so that they share the common layers:

    encoder_input = new_autoencoder.get_layer('img').input
    encoder_output = new_autoencoder.get_layer('encoder_output').output
    new_encoder = keras.Model(encoder_input, encoder_output)
    
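Putting the four steps together, a minimal end-to-end sketch (assuming TensorFlow 2.x and the architecture from the question) that confirms the restored models really do share layers:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Build the shared graph as in the question, naming the encoder's
# output layer so it can be found after loading (step 1).
encoder_input = keras.Input(shape=(28, 28, 1), name='img')
x = layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = layers.GlobalMaxPooling2D(name='encoder_output')(x)

x = layers.Reshape((4, 4, 1))(encoder_output)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
x = layers.Conv2DTranspose(32, 3, activation='relu')(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation='relu')(x)
autoencoder = keras.Model(encoder_input, decoder_output, name='autoencoder')

# Save only the autoencoder; the encoder is embedded in it (steps 2-3).
autoencoder.save('autoencoder.h5')
new_autoencoder = keras.models.load_model('autoencoder.h5')

# Rebuild the encoder from the restored autoencoder's tensors (step 4).
new_encoder = keras.Model(new_autoencoder.get_layer('img').input,
                          new_autoencoder.get_layer('encoder_output').output)

# Both restored models now reference the same layer objects,
# so training the autoencoder also updates the encoder.
assert (new_encoder.get_layer('encoder_output')
        is new_autoencoder.get_layer('encoder_output'))
```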

Alternatively, you could save and load only the weights, and reconstruct both graphs in code before loading them.
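That alternative can be sketched as follows (assuming TensorFlow 2.x; `build_models` and the tiny dense graph inside it are illustrative stand-ins for the encoder/autoencoder code):

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_models():
    # Rebuild the shared graph in code; the two returned models
    # reference the same layer objects.
    inp = keras.Input(shape=(8,), name='inp')
    h = layers.Dense(4, activation='relu', name='hidden')(inp)
    out = layers.Dense(8, name='out')(h)
    return keras.Model(inp, h, name='small'), keras.Model(inp, out, name='big')

small, big = build_models()
big.save_weights('big.weights.h5')   # weights only, no architecture

# Later: rebuild the shared graph and load weights into the larger
# model; the smaller model is restored too, since it shares layers.
new_small, new_big = build_models()
new_big.load_weights('big.weights.h5')
```

The drawback is that the architecture lives only in code, so the model-building function must be kept in sync with the saved weights.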

rvinas