I am attempting to build my first autoencoder neural net in TensorFlow. The layer dimensions in the encoder and decoder are the same, just reversed. The autoencoder learns to compress and reconstruct image data to a reasonable standard, but I would like to try to improve its performance by tying the decoder weights to be the exact transpose of the encoder weights (e.g. if an encoder layer has a weight matrix of shape [784, 196], the matching decoder layer would use its transpose of shape [196, 784]).
I am lost with how to do this in TensorFlow.
Here is a snippet of the construction of my network:
import tensorflow as tf

imgW, imgH = 28, 28
learningRate = 0.001  # actual value defined elsewhere in my script

# Layer sizes; the decoder mirrors the encoder
encoderDims = [
    imgW * imgH,
    (imgW // 2) * (imgH // 2),
    (imgW // 3) * (imgH // 3),
    (imgW // 4) * (imgH // 4)
]
decoderDims = list(reversed(encoderDims))

# Separate weight and bias variables for the encoder and decoder
encoderWeights, encoderBiases = [], []
decoderWeights, decoderBiases = [], []
for layer in range(len(encoderDims) - 1):
    encoderWeights.append(
        tf.Variable(tf.random_normal([encoderDims[layer], encoderDims[layer + 1]]))
    )
    encoderBiases.append(
        tf.Variable(tf.random_normal([encoderDims[layer + 1]]))
    )
    decoderWeights.append(
        tf.Variable(tf.random_normal([decoderDims[layer], decoderDims[layer + 1]]))
    )
    decoderBiases.append(
        tf.Variable(tf.random_normal([decoderDims[layer + 1]]))
    )

# Forward pass: encode the input, then decode it back
input = tf.placeholder(tf.float32, [None, imgW * imgH])
encoded = input
for layer in range(len(encoderDims) - 1):
    encoded = tf.add(tf.matmul(encoded, encoderWeights[layer]), encoderBiases[layer])
    encoded = tf.nn.sigmoid(encoded)

decoded = encoded
for layer in range(len(decoderDims) - 1):
    decoded = tf.add(tf.matmul(decoded, decoderWeights[layer]), decoderBiases[layer])
    if layer != len(decoderDims) - 2:  # no activation on the final output layer
        decoded = tf.nn.sigmoid(decoded)

loss = tf.losses.mean_squared_error(labels=input, predictions=decoded)
train = tf.train.AdamOptimizer(learningRate).minimize(loss)
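For context, I train this graph with a standard TF1 session loop, roughly like the sketch below (getBatch is a placeholder name for however my flattened image batches are actually produced, not real code from my script):

# Rough sketch of the training loop; `getBatch` is a stand-in for
# whatever actually supplies batches of flattened images in [0, 1].
numSteps = 10000
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(numSteps):
        batch = getBatch()  # shape [batchSize, imgW * imgH]
        _, batchLoss = sess.run([train, loss], feed_dict={input: batch})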
The two issues I do not know how to overcome are:
- How can I train only the encoder parameters with respect to the loss, leaving the decoder parameters out of the update?
- How can I create the decoder weights and biases so that, after each training iteration adjusts the encoder parameters, they are set to the transpose of the newly adjusted encoder parameters? (See the sketch after this list for what I imagine.)
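For illustration, here is a minimal sketch of the kind of tying I imagine; this is my own assumption about how it might be done, not something I know to be correct. The decoder weights are not separate variables at all but transposed views of the encoder weights via tf.transpose, the decoder biases remain free variables, and var_list restricts the optimizer to those parameters:

# Sketch of weight tying: each decoder layer reuses the transpose of the
# corresponding encoder weight matrix instead of its own tf.Variable.
# Because tf.transpose is just an op on the encoder variable, gradients
# flow back into the encoder weights, and there would be nothing to copy
# after each training step.
decoderBiases = [
    tf.Variable(tf.random_normal([decoderDims[layer + 1]]))
    for layer in range(len(decoderDims) - 1)
]

decoded = encoded
for layer in range(len(decoderDims) - 1):
    # Encoder layer 0 pairs with the last decoder layer, and so on
    tied = tf.transpose(encoderWeights[len(encoderDims) - 2 - layer])
    decoded = tf.add(tf.matmul(decoded, tied), decoderBiases[layer])
    if layer != len(decoderDims) - 2:
        decoded = tf.nn.sigmoid(decoded)

loss = tf.losses.mean_squared_error(labels=input, predictions=decoded)
# Only the encoder parameters (and the free decoder biases) are trainable
train = tf.train.AdamOptimizer(learningRate).minimize(
    loss, var_list=encoderWeights + encoderBiases + decoderBiases
)

If this tying is valid, I think the first question would mostly resolve itself, since the only weight variables left are the encoder's, but I would appreciate confirmation or a better pattern.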