
If I want to train an autoencoder with tied weights (the encoder and decoder share the same weight parameters), how can I use tf.layers.conv2d to do that correctly?

I cannot simply share variables between the corresponding conv2d layers of the encoder and decoder, because the decoder weights are the transpose of the encoder weights.

Maybe tied weights are rarely used nowadays, but I am just curious.
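(For comparison, weight tying is easy to see in the fully connected case, where the decoder simply reuses the transpose of the encoder matrix. The sketch below is only an illustration of what "tied" means; the shapes and variable names are my own assumptions, not part of this question.)

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])        # assumed flattened input, e.g. MNIST
W = tf.get_variable('W', [784, 128])               # encoder weight matrix
b_enc = tf.get_variable('b_enc', [128], initializer=tf.zeros_initializer())
b_dec = tf.get_variable('b_dec', [784], initializer=tf.zeros_initializer())

h = tf.nn.relu(tf.matmul(x, W) + b_enc)            # encoder
x_hat = tf.matmul(h, tf.transpose(W)) + b_dec      # decoder reuses W transposed: tied weights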

Wei Liu
  • [Here](https://github.com/pkmital/tensorflow_tutorials/blob/master/python/09_convolutional_autoencoder.py) is a tutorial of a convolutional autoencoder using shared weights – yuji Jan 15 '18 at 01:38

1 Answer


Use tf.nn.conv2d (and tf.nn.conv2d_transpose for the decoder). These are low-level functions that accept the kernel variable as an argument, so the same kernel can be passed to both.

kernel = tf.get_variable('kernel', [5, 5, 1, 32])
...
encoder_conv = tf.nn.conv2d(images, kernel, strides=[1, 1, 1, 1], padding='SAME')
...
# The decoder reuses the same kernel; conv2d_transpose takes the encoded tensor
# as input and also requires an explicit output_shape.
decoder_conv = tf.nn.conv2d_transpose(encoder_conv, kernel, output_shape=tf.shape(images),
                                      strides=[1, 1, 1, 1], padding='SAME')
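For completeness, here is a minimal sketch of a tied-weight convolutional autoencoder built this way. The 28x28x1 input, the stride of 2, the bias variables, and the MSE loss are assumptions for illustration only, not part of the answer above.

import tensorflow as tf

images = tf.placeholder(tf.float32, [None, 28, 28, 1])   # assumed input shape

# Single kernel shared by encoder and decoder (tied weights).
kernel = tf.get_variable('kernel', [5, 5, 1, 32])
enc_bias = tf.get_variable('enc_bias', [32], initializer=tf.zeros_initializer())
dec_bias = tf.get_variable('dec_bias', [1], initializer=tf.zeros_initializer())

# Encoder: stride-2 convolution, 1 -> 32 channels.
encoded = tf.nn.relu(
    tf.nn.conv2d(images, kernel, strides=[1, 2, 2, 1], padding='SAME') + enc_bias)

# Decoder: transposed convolution with the *same* kernel, 32 -> 1 channels.
# The output shape is taken dynamically from the input batch.
decoded = tf.nn.conv2d_transpose(
    encoded, kernel, output_shape=tf.shape(images),
    strides=[1, 2, 2, 1], padding='SAME') + dec_bias

# Reconstruction loss; only kernel and the two biases are trained.
loss = tf.reduce_mean(tf.square(decoded - images))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

Note that the kernel shape works for both directions: tf.nn.conv2d reads it as [height, width, in_channels, out_channels], while tf.nn.conv2d_transpose reads it as [height, width, output_channels, input_channels], which is exactly the transposed mapping you want for tied weights.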
Maxim