
The tanh activation function bounds the output to [-1, 1]. I wonder how this works if the input (features & target class) is given in one-hot-encoded form.

How does Keras internally handle the negative outputs of the activation function and compare them with the class labels, which are in one-hot-encoded form -- i.e. only 0's and 1's (no negative values)?
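To make the question concrete, here is a minimal numpy sketch (weights are random and purely illustrative) showing that a tanh hidden layer fed a one-hot vector can produce negative activations, all bounded inside (-1, 1):

```python
import numpy as np

# One-hot-encoded input vector (e.g. a categorical feature with 4 categories)
x = np.array([0.0, 1.0, 0.0, 0.0])

# Hypothetical hidden-layer weights and bias (random values for illustration)
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
b = np.zeros(3)

# tanh squashes each pre-activation into (-1, 1), so negative values can appear
h = np.tanh(x @ W + b)
print(h)
```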

Thanks!

Akhan

1 Answer


First of all, you simply shouldn't use tanh in your output layer. Depending on your loss function you may even get an error. A loss function like mse can take the output of tanh, but it won't make much sense.

But if we're talking about hidden layers, you're perfectly fine. Also keep in mind that there are biases, which can learn an offset before the layer's output is passed to the activation function.
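A rough numpy sketch of what happens inside such a network (the layer sizes and weights here are made up for illustration, not what Keras actually stores): the tanh hidden activations may be negative, but the softmax output layer maps everything to probabilities in [0, 1], and only those probabilities are compared against the one-hot target by the cross-entropy loss:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax: maps any real-valued vector to probabilities
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(42)

x = np.array([1.0, 0.0, 0.0])   # one-hot input (3 categories)
y = np.array([0.0, 1.0, 0.0])   # one-hot target class (3 classes)

W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # hidden layer
W2, b2 = rng.normal(size=(4, 3)), np.zeros(3)   # output layer

h = np.tanh(x @ W1 + b1)        # hidden activations, possibly negative
p = softmax(h @ W2 + b2)        # output probabilities, each in [0, 1]

# Categorical cross-entropy compares probabilities with the one-hot target,
# so the negative hidden values never touch the labels directly
loss = -np.sum(y * np.log(p))
```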

dennis-w
  • Thanks! I am talking about hidden layers (I just edited the question). I want to know how Keras internally compares the network's output with the one-hot-encoded target class. – Akhan Mar 09 '18 at 12:55
  • I deleted my previous comment, because I still don't understand the nature of your question. Inside a neural network, neurons can have negative values, which is no problem at all. Activation functions for output layers like sigmoid or softmax map every possible neuron value to [0,1], so you're good to go. – dennis-w Mar 09 '18 at 13:08
  • Ah ok, I guess this clears things up. Even if my hidden layer has the activation function "tanh", resulting in negative values, the softmax in the output layer will map them to [0,1]. Thanks. – Akhan Mar 09 '18 at 13:27