I am training a complex neural network architecture in which an RNN encodes my inputs, followed by a deep neural network with a softmax output layer.

I am now optimizing the deep neural network part of the architecture (number of units and number of hidden layers).

I am currently using the sigmoid activation for all layers. This seems to work with a few hidden layers, but as the number of layers grows, sigmoid no longer seems to be the best choice.
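For reference, here is a minimal sketch of the kind of setup I mean, assuming a Keras-style model; the vocabulary size, layer widths, and number of classes are placeholders for illustration, not my actual values:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Placeholder dimensions -- assumptions for illustration only.
vocab_size, embed_dim, rnn_units, num_classes = 10000, 128, 256, 10

model = models.Sequential([
    layers.Input(shape=(None,)),                      # variable-length token sequence
    layers.Embedding(vocab_size, embed_dim),
    layers.SimpleRNN(rnn_units),                      # RNN encoder -> fixed-size vector
    layers.Dense(512, activation="sigmoid"),          # current setup: sigmoid in every hidden layer
    layers.Dense(256, activation="sigmoid"),
    layers.Dense(num_classes, activation="softmax"),  # softmax output layer
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```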
Do you think I should do hyper-parameter optimization with sigmoid first and then with ReLU, or is it better to just use ReLU directly?

Also, do you think that having ReLU in the first hidden layers and sigmoid only in the last hidden layer makes sense, given that I have a softmax output?
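In other words, the variant I am asking about would swap the all-sigmoid stack in the sketch above for something like this (again, the layer widths and the name `deep_part` are just placeholders):

```python
from tensorflow.keras import layers

num_classes = 10  # placeholder

# Replacement for the all-sigmoid stack in the sketch above.
deep_part = [
    layers.Dense(512, activation="relu"),             # ReLU in the earlier hidden layers
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="sigmoid"),          # sigmoid only in the last hidden layer
    layers.Dense(num_classes, activation="softmax"),  # softmax output stays the same
]
```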