My training data is normalised and consists of n features, with m training examples. I have implemented a deep learning model in Keras with the first layer as
model.add(layers.Dense(32, input_shape=(n,), activation='relu'))
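For reference, here is a minimal self-contained version of my setup (the values of n and m, the random data, the output layer, and the compile settings below are placeholders for illustration, not my exact configuration):

import numpy as np
from tensorflow.keras import layers, models

n = 10    # number of features (placeholder)
m = 1000  # number of training examples (placeholder)

# Standardised inputs: mean 0, std 1, so roughly half the values are negative
X_train = np.random.randn(m, n)
y_train = np.random.randint(0, 2, size=(m, 1))  # placeholder labels

model = models.Sequential()
model.add(layers.Dense(32, input_shape=(n,), activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))  # placeholder output layer

model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(X_train, y_train, epochs=5, verbose=0)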
Since my training data is normalised to mean 0 and standard deviation 1, roughly half of the input values are negative during training. Does this make the network prone to the dying ReLU problem?
Should ReLU be used in the first layer when the training data is normalised with mean 0 and std 1?