
How can one implement L1 or L2 regularization for an LSTM in TensorFlow? TF doesn't expose the LSTM's internal weights directly, so I'm not sure how to compute the norms and add them to the loss. My loss function is just RMS for now.

The answers here don't seem to suffice.

OmG
Shamak Dutta
  • Possible duplicate of [Regularization for LSTM in tensorflow](https://stackoverflow.com/questions/37571514/regularization-for-lstm-in-tensorflow) – BiBi Dec 16 '18 at 15:35

2 Answers


The answers in the link you mentioned are the correct way to do it. Iterate through tf.trainable_variables and find the variables associated with your LSTM.
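As a minimal sketch of that filtering step: `tf.trainable_variables()` returns variables whose `name` attribute encodes their scope, so picking out the LSTM's weights reduces to a string match. The names below are made-up, TF1-style scope names for illustration, not taken from this post:

```python
# Illustrative TF1-style variable names as tf.trainable_variables() would
# report them; the actual names depend on your model's scopes.
var_names = [
    "model/dense/kernel:0",
    "model/dense/bias:0",
    "model/rnn/basic_lstm_cell/kernel:0",
    "model/rnn/basic_lstm_cell/bias:0",
]

# Keep only the variables that belong to the LSTM cell's scope.
lstm_var_names = [n for n in var_names if "lstm" in n]
```

With real variables you would filter `tf.trainable_variables()` the same way on `v.name` and feed the surviving variables into the regularizer.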

An alternative, more complicated and possibly more brittle approach is to re-enter the LSTM's variable_scope with reuse enabled (e.g. scope.reuse_variables()) and fetch each weight with get_variable(). But really, the first solution is faster and less brittle.

Eugene Brevdo

TL;DR: save all the parameters in a list, and add their L^n norm to the objective function before computing the gradients for optimisation.

1) In the function where you define the inference, collect the trainable variables and return them alongside your outputs:

net = tf.trainable_variables()
return ..., net  # "..." stands for whatever your inference function already returns

2) Add the L^n norm to the cost and compute the gradients of the cost:

weight_reg = tf.add_n([0.001 * tf.nn.l2_loss(var) for var in net]) #L2

cost = Your original objective w/o regulariser + weight_reg

param_gradients = tf.gradients(cost, net)

optimiser = tf.train.AdamOptimizer(0.001).apply_gradients(zip(param_gradients, net))
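For a quick sanity check of what the weight_reg line computes: tf.nn.l2_loss(w) equals 0.5 * sum(w**2), so the whole penalty can be reproduced in NumPy on toy arrays (the arrays below are illustrative, not real LSTM weights):

```python
import numpy as np

# Two toy parameter arrays standing in for the entries of `net`.
params = [np.array([[1.0, 2.0], [3.0, 4.0]]), np.array([2.0, -2.0])]

# Mirror of: tf.add_n([0.001 * tf.nn.l2_loss(var) for var in net]),
# using l2_loss(w) = 0.5 * sum(w**2).
weight_reg = sum(0.001 * 0.5 * np.sum(w ** 2) for w in params)
# 0.001 * (15.0 + 4.0) = 0.019
```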

3) Run the optimiser when you want via

_ = sess.run(optimiser, feed_dict={input_var: data})
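It can also help to check the gradient by hand: a penalty of 0.001 * tf.nn.l2_loss(w) contributes exactly 0.001 * w to that variable's gradient, which is the extra term tf.gradients folds into param_gradients. A tiny NumPy sketch with toy values, not real weights:

```python
import numpy as np

# Toy parameter vector.
w = np.array([1.0, -2.0, 3.0])

# Gradient of 0.001 * 0.5 * ||w||^2 with respect to w is 0.001 * w.
penalty_grad = 0.001 * w
```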
Kristofersen
sdr2002