
I can think of two methods, but I don't know whether there is a better one:

  1. In the same session, after training the autoencoder, just build a new graph using the encoding subgraph of the autoencoder as the input
  2. After training the autoencoder, save the trained weights. This way, you don't have to train the autoencoder and the new network in the same session (essentially a variant of method 1).
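Method 2 can be sketched without TensorFlow specifics. Below is a minimal NumPy illustration of the idea (the linear autoencoder, the learning rate, the array shapes, and the file name `encoder_weights.npy` are all invented for the example): train a tiny autoencoder, save only the encoder weights, and later load them to produce encodings as input features for a new network.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))           # toy data: 200 samples, 8 features

# Tiny linear autoencoder: 8 -> 3 -> 8 (made-up sizes for illustration)
W_enc = rng.normal(scale=0.1, size=(8, 3))
W_dec = rng.normal(scale=0.1, size=(3, 8))

lr = 0.01
for _ in range(500):                    # plain gradient descent on MSE
    Z = X @ W_enc                       # encode
    X_hat = Z @ W_dec                   # decode
    err = X_hat - X
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# Method 2: save the trained encoder weights to disk ...
np.save("encoder_weights.npy", W_enc)

# ... and in a later session, load them and reuse the encoder as a
# fixed feature extractor feeding the new network
W_enc_loaded = np.load("encoder_weights.npy")
features = X @ W_enc_loaded             # encodings: input for the new network
print(features.shape)                   # (200, 3)
```

In TensorFlow terms the save/load step would be a `tf.train.Saver` checkpoint rather than `np.save`, but the structure is the same: only the encoder's variables need to be restored in the new graph.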

1 Answer


The easiest thing to do is to run the encoder as usual (in training mode) but without passing the optimizer op to the `sess.run()` call (that op is what updates the trained encoder's weights). That way you can reuse the encoder without having to construct a second graph, plus you get the advantage that you can already reuse it during training!
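The distinction the answer draws is between `sess.run([train_op, encoded], ...)`, which both computes the encoding and updates the weights, and `sess.run(encoded, ...)`, which only runs the forward pass. Here is a small NumPy stand-in for the two kinds of call (the function names, toy objective, and shapes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(8, 3))   # encoder weights (trainable state)

def encode(x, W):
    """Forward pass only -- analogous to sess.run(encoded, feed_dict=...)."""
    return x @ W

def train_step(x, W, lr=0.01):
    """Forward pass plus weight update -- analogous to
    sess.run([train_op, encoded], feed_dict=...)."""
    z = x @ W
    # Toy objective (pull encodings toward zero), just to produce a gradient.
    W -= lr * (x.T @ z) / len(x)         # in-place update of the weights
    return z

x = rng.normal(size=(16, 8))

before = W.copy()
_ = encode(x, W)                         # reuse: weights are untouched
assert np.array_equal(W, before)

_ = train_step(x, W)                     # training: weights change
assert not np.array_equal(W, before)
```

Fetching only the `encoded` tensor never triggers the optimizer op, so the encoder behaves as a frozen feature extractor even while the same session is still being used for training.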