
I'm trying to code an Elastic-Net loss. It looks like this:

Loss = MSE + L1 + L2 = (1/n) * Σᵢ (yᵢ - ŷᵢ)² + λ₁ * Σⱼ |wⱼ| + λ₂ * Σⱼ wⱼ²

And I want to use this loss function in Keras:

from keras.layers import Input, BatchNormalization, Flatten, Dense
from keras.models import Model

def nn_weather_model():
    ip_weather = Input(shape=(30, 38, 5))
    x_weather = BatchNormalization(name='weather1')(ip_weather)
    x_weather = Flatten()(x_weather)
    Dense100_1 = Dense(100, activation='relu', name='weather2')(x_weather)
    Dense100_2 = Dense(100, activation='relu', name='weather3')(Dense100_1)
    Dense18 = Dense(18, activation='linear', name='weather5')(Dense100_2)
    model = Model(inputs=[ip_weather], outputs=[Dense18])
    return model, ip_weather, Dense18

My loss function is:

import keras.backend as K

def cost_function():
    # the returned loss closes over the L1 and L2 values computed outside
    def loss(y_true, y_pred):
        return K.mean(K.square(y_pred - y_true)) + L1 + L2
    return loss

It's MSE + L1 + L2,

and L1 and L2 are computed as:

# layers[3..5] are the three Dense layers (after Input, BatchNormalization,
# Flatten); get_weights()[0] is each layer's kernel matrix
weight1 = model.layers[3].get_weights()[0]
weight2 = model.layers[4].get_weights()[0]
weight3 = model.layers[5].get_weights()[0]
L1 = Calculate_L1(weight1, weight2, weight3)
L2 = Calculate_L2(weight1, weight2, weight3)

I use the Calculate_L1 function to sum the weights of dense1, dense2 and dense3, and Calculate_L2 does the analogous sum for the L2 penalty (a sketch of both is shown below).
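The post doesn't show Calculate_L1 and Calculate_L2; a minimal sketch of what they might look like, with assumed penalty factors:

import numpy as np

# hypothetical reconstruction: L1 sums absolute weights, L2 sums squared
# weights; the 0.01 factors are assumptions, not from the original post
def Calculate_L1(*weights, lambda_1=0.01):
    return lambda_1 * sum(np.sum(np.abs(w)) for w in weights)

def Calculate_L2(*weights, lambda_2=0.01):
    return lambda_2 * sum(np.sum(np.square(w)) for w in weights)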

When I train with RB_model.compile(loss=cost_function(), optimizer='RMSprop'), the L1 and L2 variables don't update every batch. So I tried to recompute them at batch_begin with a callback:

from keras.callbacks import Callback

class update_L1L2weight(Callback):
    def __init__(self):
        super(update_L1L2weight, self).__init__()
    def on_batch_begin(self, batch, logs=None):
        weight1 = self.model.layers[3].get_weights()[0]
        weight2 = self.model.layers[4].get_weights()[0]
        weight3 = self.model.layers[5].get_weights()[0]
        # note: these are plain local Python variables, so the new values
        # never reach the already-compiled loss function
        L1 = Calculate_L1(weight1, weight2, weight3)
        L2 = Calculate_L2(weight1, weight2, weight3)

How can I use the callback to calculate L1 and L2 at batch_begin, and pass the L1 and L2 variables into the loss function?
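One pattern that can feed per-batch values into a compiled loss — a sketch of an assumed approach, not code from the post — is to hold L1 and L2 in Keras backend variables, reference those variables in the loss, and overwrite their values from the callback:

import keras.backend as K
from keras.callbacks import Callback

L1_var = K.variable(0.0)
L2_var = K.variable(0.0)

def cost_function():
    def loss(y_true, y_pred):
        # the loss graph references the backend variables, so values
        # written by the callback are picked up on every batch
        return K.mean(K.square(y_pred - y_true)) + L1_var + L2_var
    return loss

class update_L1L2weight(Callback):
    def on_batch_begin(self, batch, logs=None):
        w1 = self.model.layers[3].get_weights()[0]
        w2 = self.model.layers[4].get_weights()[0]
        w3 = self.model.layers[5].get_weights()[0]
        K.set_value(L1_var, Calculate_L1(w1, w2, w3))
        K.set_value(L2_var, Calculate_L2(w1, w2, w3))

Note that the penalty then enters the loss only as a per-batch constant, so no gradient flows through it; that is one reason the regularizer approach in the answer below is preferable.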

  • Why don't you directly use a weight regularizer on the layers? – today Aug 02 '19 at 04:26
  • Hi, I found that Keras has per-layer L1 and L2 regularizers, but I don't think that's the same as the formula. The formula says the cost function is MSE + L1 + L2, whereas Keras's L1 and L2 are per layer, and every layer has its own L1 and L2. I'm not sure that's the same as cost function + L1 + L2. Any helpful information? – 陳建勤 Aug 02 '19 at 06:05
  • If you set the regularizers on layers, then they will be added to whatever loss you specify. – today Aug 02 '19 at 06:11
  • Thanks for your comments. Is there a way to see how Keras does the calculation when a layer uses L1/L2? Cheers. – 陳建勤 Aug 02 '19 at 06:14
  • I found the formula here, if anybody wants to see it: https://github.com/keras-team/keras/blob/master/keras/regularizers.py – 陳建勤 Aug 02 '19 at 06:49
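For reference, the L1L2 regularizer in the linked regularizers.py boils down to the following per-kernel penalty (paraphrased as a sketch, not a verbatim copy of the file):

import keras.backend as K

# penalty added to the loss for one layer's kernel x
def l1l2_penalty(x, l1=0.01, l2=0.01):
    return l1 * K.sum(K.abs(x)) + l2 * K.sum(K.square(x))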

1 Answer


You can simply use the built-in weight regularization in Keras for each layer. To do that, pass a regularizer to the layer's kernel_regularizer argument. For example:

from keras import regularizers

model.add(Dense(..., kernel_regularizer=regularizers.l2(0.1)))
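For an elastic-net-style penalty (L1 and L2 together), Keras also provides regularizers.l1_l2. Applied to the question's model, it could look like this sketch (the 0.01 penalty factors are assumed, not from the question):

from keras import regularizers
from keras.layers import Input, BatchNormalization, Flatten, Dense
from keras.models import Model

def nn_weather_model():
    ip_weather = Input(shape=(30, 38, 5))
    x = BatchNormalization(name='weather1')(ip_weather)
    x = Flatten()(x)
    # both penalties on every Dense kernel; the factors are assumed values
    reg = regularizers.l1_l2(l1=0.01, l2=0.01)
    x = Dense(100, activation='relu', kernel_regularizer=reg, name='weather2')(x)
    x = Dense(100, activation='relu', kernel_regularizer=reg, name='weather3')(x)
    op = Dense(18, activation='linear', kernel_regularizer=reg, name='weather5')(x)
    return Model(inputs=[ip_weather], outputs=[op]), ip_weather, op

With the penalties attached to the layers, you compile with a plain MSE loss and the regularization terms are added on top automatically:

model, ip, op = nn_weather_model()
model.compile(loss='mse', optimizer='RMSprop')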

Those regularizers create loss tensors that are added to the total loss, as implemented in the Keras source code:

# Add regularization penalties
# and other layer-specific losses.
for loss_tensor in self.losses:
    total_loss += loss_tensor
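You can check this on the model itself: each regularized kernel contributes one tensor to model.losses (continuing the sketch above):

model, ip, op = nn_weather_model()
model.compile(loss='mse', optimizer='RMSprop')
# one penalty tensor per regularized Dense kernel
print(len(model.losses))  # -> 3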