(Disclaimer: I'm using an old version of TensorFlow (1.5.0) with Python 3.6.)
I'm trying to add an L1 regularizer to my (autoencoder) model, and I want to apply it directly in the layers. Following what I could find online and adapting it to my situation, I came up with this:
import tensorflow as tf
import tensorflow.contrib.slim as slim

# L1 regularizer to attach to the layer weights
l1_reg = slim.l1_regularizer(scale=0.01)

# Generic spatial convolution; the regularizer is only attached to the last layer
spatial_conv = lambda score, layer_id: slim.conv2d(
    score,
    self.n_skernels[layer_id],
    [self.s_kernelsize, 1],
    [self.s_stride, 1],
    weights_regularizer=(l1_reg if (layer_id == self.nlayers - 1) else None),
    scope=f'Spatial{layer_id}')
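This is a generic layer that I then create in a for loop later on. A simplified sketch of that loop (self.input_score here is just a stand-in for my actual input tensor):

score = self.input_score
for layer_id in range(self.nlayers):
    # Only the last layer gets the L1 regularizer, per the condition above
    score = spatial_conv(score, layer_id)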
The important part for my question is the weights_regularizer parameter. I'm now struggling to confirm that the L1 regularizer is actually attached to the layer, either by inspecting the layer itself (unfortunately, I cannot find a function that prints all of a layer's attributes; I usually only end up with the shape and the op type, Conv2D here) or by inspecting the regularization loss. I tried:
print(f'Regularization list: {tf.losses.get_regularization_losses()}')
print(f'Regularization loss: {tf.losses.get_regularization_loss()}')
But the list is empty and the loss is a constant 0... Since the code uses old libraries, I haven't been able to find much documentation on this issue. Am I missing a function I need to call for the regularizer in the layer to be taken into account? Also, is there a way to print a layer with all its attributes, to check whether adding a regularizer this way actually does something?
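For reference, a stripped-down sketch of the kind of check I'm describing (the input shape is a dummy, not my real data; my understanding is that tf.losses.get_regularization_losses() just reads the tf.GraphKeys.REGULARIZATION_LOSSES collection):

import tensorflow as tf
import tensorflow.contrib.slim as slim

l1_reg = slim.l1_regularizer(scale=0.01)

# Dummy input; my real model uses the autoencoder's feature maps
inputs = tf.placeholder(tf.float32, [None, 64, 10, 1])
net = slim.conv2d(inputs, 16, [3, 1], [1, 1],
                  weights_regularizer=l1_reg,
                  scope='Spatial0')

# I'd expect the regularizer to land in this collection, which is what
# tf.losses.get_regularization_losses() reads from
print(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES))
print(tf.losses.get_regularization_losses())

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(tf.losses.get_regularization_loss()))

In my actual model, both prints come back empty/zero, which is what prompted the question.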