
According to: "Why is the decoder in an autoencoder uses a sigmoid on the last layer?"

  • The last layer uses a sigmoid activation so that the output lies in the range [0, 1].
  • If the input to the autoencoder is normalized (each pixel in [0, 1]), can we change the activation function of the last layer from sigmoid to something else?
  • Can we use no activation function at all?
user3668129
  • It depends on what kind of output you want, since even though it's an "autoencoder", the input and the output don't really need to be the same. – Natthaphon Hongcharoen Jul 01 '21 at 15:39
  • But if you want the output in 0-1, then yes, better to use sigmoid. You'll have a very hard time training the outputs into such a tiny range out of everything `float32` can represent if you don't use any activation. – Natthaphon Hongcharoen Jul 01 '21 at 15:41
  • On the other hand, if you want the output to be an integer, then you don't really need an activation function. Maybe `relu` will do something useful if you only want positive outputs, but that's about it. – Natthaphon Hongcharoen Jul 01 '21 at 15:45
  • I’m voting to close this question because it is not about programming as defined in the [help] but about ML theory and/or methodology - please see the intro and NOTE in the `machine-learning` [tag info](https://stackoverflow.com/tags/machine-learning/info). – desertnaut Jul 01 '21 at 16:45
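The trade-off in the comments can be sketched numerically. Below is a toy NumPy decoder (the layer sizes and random weights are arbitrary, chosen only for illustration): a sigmoid on the last layer guarantees reconstructions in (0, 1), matching normalized pixel inputs, while a linear (no activation) last layer produces unbounded values and `relu` only clamps them to be non-negative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy decoder weights: 8-dim latent code -> 16 "pixels" (sizes are arbitrary).
W1, b1 = rng.normal(size=(8, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 16)), np.zeros(16)

def decode(z, final=None):
    h = np.maximum(z @ W1 + b1, 0.0)      # hidden layer with ReLU
    out = h @ W2 + b2                     # raw (linear) output
    if final == "sigmoid":
        out = 1.0 / (1.0 + np.exp(-out))  # squashes every value into (0, 1)
    elif final == "relu":
        out = np.maximum(out, 0.0)        # non-negative, but unbounded above
    return out

z = rng.normal(size=(4, 8))               # a batch of latent codes
sig = decode(z, final="sigmoid")
lin = decode(z, final=None)

print(sig.min() >= 0 and sig.max() <= 1)  # True: sigmoid bounds the output
print(lin.min() < 0 or lin.max() > 1)     # True: linear output is unbounded
```

With a linear last layer the network has to *learn* to keep its outputs inside [0, 1], which is what the comment about `float32`'s range is getting at; the sigmoid enforces it by construction.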

0 Answers