I'm setting up a custom training environment in tensorflow-keras, and I want to know whether it is possible to reconnect the shared weights of models that were saved to different files.
I have an attention encoder-decoder model. As is well known, the training model for attention differs slightly from the prediction model, but the two share the same weights. First I save a non-trained model into three files with tf.keras.models.save_model (a sketch of this setup follows the list):
- Full Model (training model)
- Encoder Model (prediction model)
- Decoder Model (prediction model)
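For context, here is a minimal sketch of that setup. It is an illustration rather than my exact code: it swaps in the stock tf.keras.layers.Attention for my custom AttentionLayer and uses made-up sizes.

import tensorflow as tf
from tensorflow.keras import layers

# Made-up sizes, for illustration only
vocab_size, embed_dim, units = 1000, 64, 128

# Layer instances shared by the training model and the prediction models
embed = layers.Embedding(vocab_size, embed_dim)
encoder = layers.LSTM(units, return_sequences=True, return_state=True)
decoder = layers.LSTM(units, return_sequences=True)
attention = layers.Attention()  # stand-in for my custom AttentionLayer
proj = layers.Dense(vocab_size, activation='softmax')
concat = layers.Concatenate()

# Full (training) model: teacher-forced decoder inputs
enc_in = tf.keras.Input(shape=(None,))
dec_in = tf.keras.Input(shape=(None,))
enc_seq, state_h, state_c = encoder(embed(enc_in))
dec_seq = decoder(embed(dec_in), initial_state=[state_h, state_c])
full_out = proj(concat([dec_seq, attention([dec_seq, enc_seq])]))
full_model = tf.keras.Model([enc_in, dec_in], full_out)

# Encoder (prediction) model
encoder_model = tf.keras.Model(enc_in, [enc_seq, state_h, state_c])

# Decoder (prediction) model: encoder output and states arrive as inputs
enc_seq_in = tf.keras.Input(shape=(None, units))
state_h_in = tf.keras.Input(shape=(units,))
state_c_in = tf.keras.Input(shape=(units,))
dec_seq2 = decoder(embed(dec_in), initial_state=[state_h_in, state_c_in])
dec_out = proj(concat([dec_seq2, attention([dec_seq2, enc_seq_in])]))
decoder_model = tf.keras.Model(
    [dec_in, enc_seq_in, state_h_in, state_c_in], dec_out)

# Each call serializes its own copy of the (shared, untrained) weights
tf.keras.models.save_model(full_model, 'full_model.h5')
tf.keras.models.save_model(encoder_model, 'encoder.h5')
tf.keras.models.save_model(decoder_model, 'decoder.h5')

In memory, all three models are built from the same layer instances, so they genuinely share weights before saving.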
Then I use tf.keras.models.load_model to load the three saved models, intending to train only the full model as usual:
import tensorflow as tf

full_model = tf.keras.models.load_model(
    'full_model.h5',
    custom_objects={'AttentionLayer': AttentionLayer}
)
encoder_model = tf.keras.models.load_model(
    'encoder.h5',
    custom_objects={'AttentionLayer': AttentionLayer}
)
decoder_model = tf.keras.models.load_model(
    'decoder.h5',
    custom_objects={'AttentionLayer': AttentionLayer}
)
full_model.fit(...)
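Concretely, "as usual" here means the standard compile/fit cycle; a minimal sketch with random data, reusing the made-up sizes from the snippet above:

import numpy as np

# Random batch, shapes matching the illustration above
enc_x = np.random.randint(0, vocab_size, size=(32, 10))
dec_x = np.random.randint(0, vocab_size, size=(32, 12))
dec_y = np.random.randint(0, vocab_size, size=(32, 12))

full_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
full_model.fit([enc_x, dec_x], dec_y, epochs=1)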
So the full_model weights update as expected, but the encoder and decoder weights stay frozen: training the full model no longer propagates to the prediction models.
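For reference, this is how I confirmed that the loaded models hold independent weight copies rather than shared variables (a quick check, using the models loaded above):

# After load_model, each model owns separate tf.Variable objects, so
# training full_model never touches encoder_model or decoder_model
full_ids = {id(v) for v in full_model.weights}
shared = [v for v in encoder_model.weights if id(v) in full_ids]
print(len(shared))  # prints 0: no variables are shared after loading

Is there some way to reconnect the graph of those models so they share weights again?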