
I am training a NN for a regression problem, so the output layer has a linear activation function. The NN output is supposed to be between -20 and 30. My NN performs well most of the time, but sometimes it produces outputs greater than 30, which is not desirable for my system. Does anyone know of an activation function that can impose such a restriction on the output, or have any suggestions on modifying the linear activation function for my application?

I am using Keras with the TensorFlow backend for this application.
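
A minimal sketch of my current setup (the input shape and hidden layer size below are placeholders, not my actual architecture):

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Dense(64, activation='relu', input_shape=(10,)),  # placeholder sizes
    layers.Dense(1, activation='linear'),                    # unbounded linear output
])
model.compile(optimizer='adam', loss='mse')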

jd95

2 Answers


What you can do is activate your last layer with a sigmoid, so the result is between 0 and 1, and then create a custom layer to map it to the desired range:

from tensorflow.keras import backend as K

def get_range(input, maxx, minn):
    # Min-max rescale each sample from its own [min, max] to [minn, maxx]
    mn = K.min(input, axis=1, keepdims=True)
    mx = K.max(input, axis=1, keepdims=True)
    return (maxx - minn) * (input - mn) / (mx - mn) + minn

and then add it to your network:

out = layers.Lambda(get_range, arguments={'maxx': 30, 'minn': -20})(sigmoid_output)

The output will be rescaled to lie between 'minn' and 'maxx'.
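
Note that since the sigmoid already confines each activation to [0, 1], a simpler alternative (a sketch, assuming the same sigmoid_output as above) is a fixed affine map, which avoids the per-sample min/max entirely:

out = layers.Lambda(lambda x: (30 - (-20)) * x + (-20))(sigmoid_output)  # maps [0, 1] to [-20, 30]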

UPDATE

If you want to clip your outputs without rescaling the in-range ones, do this instead (applied to the raw linear output, since a sigmoid output is already confined to [0, 1]):

def clip(input, maxx, minn):
    # Clamp values elementwise into [minn, maxx]; in-range values pass through unchanged
    return K.clip(input, minn, maxx)

out = layers.Lambda(clip, arguments={'maxx': 30, 'minn': -20})(linear_output)
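
For completeness, a minimal end-to-end sketch of the clipping approach (the surrounding model, its shapes and sizes, is hypothetical, not from this answer):

from tensorflow.keras import layers, Model
from tensorflow.keras import backend as K

def clip(input, maxx, minn):
    return K.clip(input, minn, maxx)

inputs = layers.Input(shape=(10,))               # hypothetical feature count
x = layers.Dense(64, activation='relu')(inputs)
linear_output = layers.Dense(1)(x)               # plain linear regression head
out = layers.Lambda(clip, arguments={'maxx': 30, 'minn': -20})(linear_output)

model = Model(inputs, out)
model.compile(optimizer='adam', loss='mse')

One caveat: K.clip has zero gradient wherever it is actively clamping, so predictions outside [-20, 30] receive no learning signal through the clipped layer.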
Thibault Bacqueyrisses
  • Thanks for the quick answer! However, I still have one doubt: if I scale my output according to the min and max values, then most of the output data will be changed by the scaling. But if my model output is already between -20 and 30, it is a perfect match; there is no error, and scaling it would add some error :( So I want to move my output to the min or max value only when it is out of the given range. – jd95 May 30 '19 at 21:22
  • I see, but I think if everything is right, your model will learn a new way to get the correct answers, but this time without outliers. But if you really need to clip, I will update my answer! – Thibault Bacqueyrisses May 30 '19 at 21:43
  • Thanks for your update. Clipping with the Lambda layer worked :) – jd95 May 30 '19 at 22:53

What you should do is normalize your target outputs to the range [-1, 1] or [0, 1], then use a tanh (for [-1, 1]) or sigmoid (for [0, 1]) activation at the output layer, and train the model with the normalized data.

Then, during inference, you can denormalize the predictions to get values back in the original range.
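
A sketch of what this looks like in practice (the helper names and the choice of [0, 1] are for illustration only):

y_min, y_max = -20.0, 30.0  # known target range

def normalize(y):
    # [-20, 30] -> [0, 1], to match a sigmoid output layer
    return (y - y_min) / (y_max - y_min)

def denormalize(y_pred):
    # [0, 1] -> [-20, 30], applied to predictions at inference time
    return y_pred * (y_max - y_min) + y_min

Because the sigmoid can only emit values in [0, 1], denormalize can never return a value outside [-20, 30].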

Dr. Snoopy
  • Normalizing the input will not guarantee that the output is within the same range (regarding your statement "train the model with the normalized data"). – Markus May 30 '19 at 21:14
  • @Markus I said to normalize the output, not the input. – Dr. Snoopy May 30 '19 at 21:15
  • @MatiasValdenegro I want to scale/change my output value only if it is outside the range. If the output is in the range, then it is a perfect prediction for my application. For example, suppose my output is between 1 and 50. If I scale these values to be between 1 and 30, then all outputs between 1 and 30 will also be scaled and will have smaller values than the actual prediction. That's not what I want. I want to map any output value above 30 to 30, and any output value below -20 to -20; any value between -20 and 30 should remain as it is. – jd95 May 30 '19 at 21:36
  • @JD95 That is not scaling, that is clamping. But if you normalize your data, no values outside of that range will be produced in normalized values. – Dr. Snoopy May 30 '19 at 21:40