
I've been reading up on autoencoders and all the examples I see mirror the encoder portion when building the decoder.

encoder = [128, 64, 32, 16, 3]
decoder = [3, 16, 32, 64, 128]

Is this just by convention?

Is there any specific reason the decoder should not have a different hidden layer structure than the encoder? For example...

encoder = [128, 64, 32, 16, 3]
decoder = [3, 8, 96, 128]

so long as the inputs and outputs match.

Maybe I'm missing something obvious.
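For concreteness, here is a rough sketch of the asymmetric version I have in mind (assuming Keras and the layer sizes above; the encoder is [128, 64, 32, 16, 3], the decoder is [3, 8, 96, 128]):

from tensorflow import keras
from tensorflow.keras import layers

# Encoder: 128 -> 64 -> 32 -> 16 -> 3
encoder = keras.Sequential([
    keras.Input(shape=(128,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(3, activation="relu"),   # coding layer
])

# Decoder: 3 -> 8 -> 96 -> 128 (not a mirror of the encoder)
decoder = keras.Sequential([
    keras.Input(shape=(3,)),
    layers.Dense(8, activation="relu"),
    layers.Dense(96, activation="relu"),
    layers.Dense(128, activation="sigmoid"),  # output size matches the input; sigmoid assumes inputs scaled to [0, 1]
])

autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

As far as I can tell this compiles and trains, since only the coding size (3) and the input/output size (128) have to line up.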

P-Rod

1 Answer


It's just a convention:

"The architecture of a stacked autoencoder is typically symmetrical with regards to the central hidden layer (the coding layer)." (Hands-On Machine Learning with Scikit-Learn and TensorFlow)

In your case the coding layer is the layer of size 3, so the stacked autoencoder has the shape 128, 64, 32, 16, 3, 16, 32, 64, 128.
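For reference, a minimal sketch of that symmetric stacked autoencoder (again assuming Keras; activations and loss are illustrative choices, not part of the convention itself):

from tensorflow import keras
from tensorflow.keras import layers

# Symmetric stacked autoencoder: 128, 64, 32, 16, 3, 16, 32, 64, 128
stacked_ae = keras.Sequential([
    keras.Input(shape=(128,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(3, activation="relu"),       # coding layer
    layers.Dense(16, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(128, activation="sigmoid"),  # reconstruction; sigmoid assumes inputs scaled to [0, 1]
])
stacked_ae.compile(optimizer="adam", loss="mse")

Nothing breaks if you replace the decoder half with a different stack of sizes, as in your second example; the mirrored layout is simply the conventional default.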