
Is it possible to create a custom activation function of the form:

def newactivation(x):
    if x <= -1:
        return -1
    elif x > -1 and x <= 1:
        return x
    else:
        return 1

So basically it would be a piecewise-linear version of tanh(x).

Is it a problem during optimization that the function has two non-differentiable points, at -1 and 1?

How could I implement this?

kleka
  • Should not be a problem, I think. Even ReLU is not differentiable at 0. Do check the source code implementation to see how they handle that case. – Anakin Apr 10 '19 at 10:59
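
As the comment notes, the kinks at -1 and 1 are handled the same way frameworks handle ReLU at 0: automatic differentiation simply returns one of the one-sided derivatives there. A quick check (a sketch assuming the TensorFlow backend; the sample values are arbitrary):

import tensorflow as tf

x = tf.Variable([-2.0, -1.0, 0.0, 1.0, 2.0])
with tf.GradientTape() as tape:
    y = tf.clip_by_value(x, -1.0, 1.0)

# Inside (-1, 1) the gradient is 1, outside it is 0; at the kinks
# autodiff picks one of the one-sided derivatives (0 or 1), so
# optimization proceeds without any special handling.
print(tape.gradient(y, x))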

1 Answer


This is easy to implement with the clip function:

import keras.backend as K

def activation(x):
    return K.clip(x, -1.0, 1.0)

Since the gradient is bounded and never explodes, this should not be an issue; the function has a shape similar to the ReLU.
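
For completeness, here is a minimal usage sketch passing the custom function to a layer (the layer sizes, input shape, optimizer, and loss are placeholders chosen for illustration):

from keras.models import Sequential
from keras.layers import Dense

# A Keras layer accepts any callable as its activation, so the
# custom function defined above can be used directly.
model = Sequential([
    Dense(32, activation=activation, input_shape=(10,)),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")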

Dr. Snoopy