
I need to remove the near-zero weights of a neural network so that the distribution of its parameters ends up far away from zero. (Figure from the paper: the distribution of weights after removing near-zero weights and weight-scaling.)

I encountered this problem in this paper: https://ieeexplore.ieee.org/document/7544366

I wonder how I can achieve this in my PyTorch/TensorFlow program, for example with a customized activation layer, or by defining a loss function that penalizes near-zero weights?

Thank you if you can provide any help.

1 Answer


You're looking for L1 regularization; see the docs.

import tensorflow as tf

# L1 penalty on the layer's kernel weights; larger coefficients push more
# weights toward exactly zero.
tf.keras.layers.Dense(units=128,
                      kernel_regularizer=tf.keras.regularizers.L1(0.1))

Weights with small magnitudes will be driven to zero.
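For context, a minimal sketch of how such a layer might be dropped into a model; the input shape and the second Dense layer are illustrative, and Keras adds the regularization penalty to the training loss automatically:

import tensorflow as tf

# Illustrative model; the L1 coefficient of 0.1 matches the snippet above.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=128, input_shape=(20,),
                          kernel_regularizer=tf.keras.regularizers.L1(0.1)),
    tf.keras.layers.Dense(units=1),
])

# The L1 penalty on the kernel is added to the loss during training.
model.compile(optimizer="adam", loss="mse")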

Nicolas Gervais
  • Hi, thanks for your reply. However, I want to achieve the exact opposite: all parameters far away from zero. I think that no matter how I adjust the L1 coefficient, larger parameters will still receive a larger penalty than smaller ones, whereas I want smaller parameters to be penalized more. Could I set the L1/L2 coefficient to a negative value? – ANewBee666 Apr 09 '21 at 16:22
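One possible direction, sketched below: instead of a negative L1 coefficient (which would make the penalty unbounded, rewarding the optimizer for growing weights without limit), a custom tf.keras regularizer can charge a cost only when a weight's magnitude falls below a threshold. The class name, threshold, and coefficient here are illustrative assumptions, not taken from the paper.

import tensorflow as tf

# Hypothetical regularizer: penalizes weights whose magnitude is below a
# threshold, so training is pushed to keep parameters away from zero.
# `threshold` and `coeff` are illustrative values, not from the paper.
class AwayFromZeroRegularizer(tf.keras.regularizers.Regularizer):
    def __init__(self, threshold=0.05, coeff=0.01):
        self.threshold = threshold
        self.coeff = coeff

    def __call__(self, weights):
        # Penalty grows linearly as |w| drops below the threshold and is
        # zero once |w| >= threshold (bounded, unlike a negative L1 term).
        deficit = tf.nn.relu(self.threshold - tf.abs(weights))
        return self.coeff * tf.reduce_sum(deficit)

    def get_config(self):
        return {"threshold": self.threshold, "coeff": self.coeff}

layer = tf.keras.layers.Dense(
    units=128,
    kernel_regularizer=AwayFromZeroRegularizer(threshold=0.05, coeff=0.01),
)

Because the penalty vanishes once a weight's magnitude exceeds the threshold, the optimizer gains nothing from inflating weights indefinitely, which is the failure mode a negative L1/L2 coefficient would invite.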