I'm using feed-forward neural networks trained with gradient descent and backpropagation, where the hidden and output neurons use the tanh activation function and the input neurons are linear.
What is, in your opinion, the best way to normalize numerical input data in these cases:
- The maximum is known; for example, the maximum positive value is 1000 and the maximum negative value is -1000.
- The maximum is unknown.
Also, should I use the same normalization range for all inputs, or is it okay for different inputs to be normalized differently?
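To make the question concrete, here is a minimal sketch of the two approaches I'm currently considering (function names and the example values are just illustrative): dividing by the known maximum magnitude to map the input into tanh's natural [-1, 1] range, and z-score standardization computed from the training data when the maximum is unknown.

```python
import numpy as np

def scale_known_max(x, max_abs=1000.0):
    # Case 1: the maximum magnitude is known (e.g. +/-1000),
    # so dividing maps the input into [-1, 1].
    return x / max_abs

def scale_unknown_max(x):
    # Case 2: the maximum is unknown, so standardize using the
    # mean and standard deviation estimated from the training data.
    return (x - x.mean()) / x.std()

data = np.array([-1000.0, -250.0, 0.0, 500.0, 1000.0])
print(scale_known_max(data))    # all values fall in [-1, 1]
print(scale_unknown_max(data))  # zero mean, unit variance
```

With standardization the statistics computed on the training set would have to be reused unchanged at inference time, which is part of why I'm unsure which approach fits better.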
Thanks!