
I used an autoencoder to pre-train on the data: I normalize the input data and pass it into the autoencoder, which ends up reducing the number of features.

Now I want to use the output of the autoencoder for a prediction task, by passing it into a fully connected feed-forward network.

My question is: do I need to normalize the data again before passing it into the feed-forward network?

Vinod Prime

1 Answer


Normally not: due to regularisation, the output of the hidden layer should already be centred and normalised. However, if you look at the autoencoder formulation $\arg\min_{f,g} \lVert X - f(g(X)) \rVert$, there is nothing that keeps the autoencoder from learning a denormalized representation.

So what can you do?

  • Check whether your training data is already normalised in the hidden layer (see the sketch after this list)
  • Normalize the data anyway; it doesn't hurt, as it is a pretty cheap operation
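
A minimal sketch of both checks, assuming the hidden-layer activations are available as a NumPy array; `Z` here is random placeholder data, not output from an actual encoder:

```python
import numpy as np

# Z stands in for the hidden-layer (latent) activations of a trained
# encoder, shape (n_samples, n_hidden); random placeholder data so
# the sketch runs stand-alone.
rng = np.random.default_rng(0)
Z = rng.normal(loc=5.0, scale=[1.0, 100.0, 0.01], size=(1000, 3))

# 1. Check per-unit statistics: units whose ranges differ by orders
#    of magnitude indicate a denormalized representation.
print("per-unit min:", Z.min(axis=0))
print("per-unit max:", Z.max(axis=0))

# 2. Normalize anyway: centre and scale each unit (z-score). Cheap,
#    and harmless if the activations were already normalised.
mu = Z.mean(axis=0)
sigma = Z.std(axis=0) + 1e-8  # avoid division by zero
Z_norm = (Z - mu) / sigma
```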
CAFEBABE
  • the output of the hidden layer is not normalized, I checked it – Vinod Prime Jan 09 '16 at 13:16
  • Just to be sure: it is not necessary that they are `[0,1]`-normalised or anything specific; they should only be on the same order of magnitude. Which algorithm are you using to train the autoencoder? Instead of normalising the output of the hidden layer, you can also consider altering the weights accordingly. – CAFEBABE Jan 09 '16 at 14:54
  • what do you mean by magnitude? – Vinod Prime Jan 09 '16 at 15:04
  • They are all in the same range. Basically, if you do a histogram for each hidden unit, it shouldn't be the case that one hidden unit's output is in a [0,1] range while another's is in a [0,100] range (a sketch of this check follows the comments). – CAFEBABE Jan 09 '16 at 15:08
  • I use a VAE, and in total I have 64 latent variables. I checked the min and max of each latent variable, and they are different. – Vinod Prime Jan 09 '16 at 15:31
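
A minimal sketch of the histogram check described in the comments, again with random placeholder data standing in for real latent activations (the 64-unit shape matches the VAE mentioned above):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder for 64 latent variables from a VAE encoder; each unit
# gets a random scale so some histograms span very different ranges.
rng = np.random.default_rng(1)
Z = rng.normal(size=(1000, 64)) * rng.uniform(0.01, 100.0, size=64)

# One histogram per latent unit; wildly different x-axis ranges mean
# the units live on different orders of magnitude.
fig, axes = plt.subplots(8, 8, figsize=(16, 16))
for i, ax in enumerate(axes.ravel()):
    ax.hist(Z[:, i], bins=30)
    ax.set_title(f"unit {i}", fontsize=8)
fig.tight_layout()
plt.show()
```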