
I am well aware that a "normal" neural network should use normalized input data, so that no single variable has a larger influence on the weights than the others.

But what if you have a Q-network where your training data and test data can differ a lot and can change over time, as in a continuous control problem?

My idea was to first do a run without normalizing the input data, then compute the mean and variance of the inputs seen during that run, and use those statistics to normalize the inputs of my next run. But what is the standard approach in this case?
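The idea of normalizing with statistics gathered from earlier experience can also be done incrementally, updating the mean and variance online as each observation arrives (Welford's algorithm), rather than in two separate runs. A minimal sketch of such a running normalizer (the class name and interface here are illustrative, not from any particular library):

```python
class RunningNormalizer:
    """Tracks a running mean and variance with Welford's online algorithm
    and standardizes observations using the statistics seen so far."""

    def __init__(self, eps=1e-8):
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0   # running sum of squared deviations from the mean
        self.eps = eps  # avoids division by zero for near-constant inputs

    def update(self, x):
        # Incorporate one new observation into the running statistics.
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (x - self.mean)

    def normalize(self, x):
        # Standardize x with the statistics accumulated so far.
        var = self.m2 / self.count if self.count > 1 else 1.0
        return (x - self.mean) / ((var + self.eps) ** 0.5)
```

In use, each observation is passed to `update` before (or after) being normalized, so the statistics slowly track the distribution the agent actually encounters instead of being fixed from a single earlier run.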

Best regards Søren Koch


1 Answer


Normalizing the input can lead to faster convergence. It is highly recommended to normalize the inputs.
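The usual form of input normalization is z-scoring: subtract each feature's mean and divide by its standard deviation, so that all features end up on a comparable scale. A small self-contained sketch (the function name is illustrative):

```python
def standardize_columns(rows):
    """Z-score each feature column: subtract the column mean and divide
    by the column standard deviation, so all features share one scale."""
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    # Guard against zero-variance columns by falling back to a std of 1.0.
    stds = [(sum((v - m) ** 2 for v in c) / len(c)) ** 0.5 or 1.0
            for c, m in zip(cols, means)]
    return [[(v - m) / s for v, m, s in zip(row, means, stds)]
            for row in rows]
```

After this transformation, a feature that originally ranged over thousands and one that ranged over single digits contribute comparably to the gradient updates.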

And because the data passes through non-linearities as it flows through the network, the activations between layers will no longer be normalized. For that reason, batch normalization layers are often inserted to restore this property and speed up convergence. Unit-Gaussian data helps convergence, so try to keep the activations in unit-Gaussian form as much as possible.
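The forward pass of batch normalization (ignoring the running statistics used at inference time) is short: normalize the batch to zero mean and unit variance, then apply a learnable scale and shift. A minimal sketch for a batch of scalar activations:

```python
def batch_norm_forward(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch of scalar activations to zero mean / unit variance,
    then apply the learnable scale (gamma) and shift (beta)."""
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    # eps keeps the division stable when the batch variance is tiny.
    return [gamma * (x - mean) / (var + eps) ** 0.5 + beta for x in batch]
```

In a real network each feature channel gets its own `gamma` and `beta`, learned by gradient descent, and separate running statistics are kept for evaluation; this sketch only shows the per-batch normalization step.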

  • Yeah, but what if I have a good idea of the range of values my input variables will take, yet can't be sure, because the purpose is that the controller should be deployed somewhere and then keep computing for the rest of its life? – Søren Koch Mar 13 '18 at 10:20
  • Normalization of input values doesn't affect the quality of the data, but it helps your network converge faster: having all the features in a similar range yields better gradient updates. I don't really understand the purpose of passing input values without normalization; I don't think it will make any difference except to slow down learning. Still, if you really want to give it a shot, I would suggest training both networks simultaneously, one with normalized inputs and one without, and comparing their accuracy/error and training time. – Aman Gokrani Mar 13 '18 at 23:11