
Currently, I am trying to prune a network (a simple convolutional autoencoder), so far without success.

First of all, the source I am referring to is this blog post:

http://machinethink.net/blog/compressing-deep-neural-nets/

As a first step, I just want to set a convolutional filter to zero so that it no longer contributes to the output. What I've tried so far is:

weights = model.get_weights() # a list of numpy arrays (kernels and biases, in layer order)
weights[2][..., 0] = 0 # e.g. zero the first filter (output channel) of the second Conv2D layer
weights[3][0] = 0 # and that filter's bias, of course

Afterwards, I initialize a new model with the same architecture and give it those weights:

newmodel.set_weights(weights)
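
(Here newmodel is built with the same architecture as the original first; a minimal sketch of that step, assuming Keras' clone_model and hypothetical compile settings:)

from keras.models import clone_model

newmodel = clone_model(model)                   # same architecture, freshly initialized weights
newmodel.compile(optimizer='adam', loss='mse')  # hypothetical optimizer/loss for the autoencoder
newmodel.set_weights(weights)                   # load the pruned weights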

and call:

newmodel.fit()

However, after retrieving the weights:

newweights = newmodel.get_weights()

newweights[2][..., 0]

is not 0 anymore.

This means that my filter was updated during training, although, according to the article, pruned weights should stay at zero.

Can anyone tell me what I am doing wrong (or how to do it correctly)?
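
For completeness, the only workaround I can think of so far is to re-apply the mask after every gradient update, e.g. via a callback. A rough sketch (x_train is a placeholder for my training data; the autoencoder is trained input-to-input):

import numpy as np
from keras.callbacks import LambdaCallback

def rezero_filter(batch, logs):
    # re-apply the pruning mask after each gradient update
    w = newmodel.get_weights()
    w[2][..., 0] = 0   # kernel of the pruned filter
    w[3][0] = 0        # its bias
    newmodel.set_weights(w)

rezero = LambdaCallback(on_batch_end=rezero_filter)
newmodel.fit(x_train, x_train, epochs=10, callbacks=[rezero])  # x_train and epochs are placeholders

This does keep the filter at zero, but calling get_weights()/set_weights() after every batch is obviously wasteful, so I would prefer a cleaner solution.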

Best,

Arepo

  • Your answer is sort of here. You just need to change the mask and get rid of unnecessary asserts: https://stackoverflow.com/questions/50513505/how-to-manipulate-constrain-the-weights-of-the-filter-kernel-in-conv2d-in-kera#50514341 – Daniel Möller Jul 30 '18 at 18:18
  • So you mean to take the part kernel_mask = np.ones(self.kernel_size + (1, 1)) and change it to kernel_mask = np.zeros(self.kernel_size + (1, 1)) effectively? (And then, of course, a bunch of the code becomes unnecessary...) – Arepo Jul 30 '18 at 21:41
  • And if I'm not mistaken, this would zero out the whole layer's weights rather than just "turn off" one filter. Would that mean I have to split the layer in question into two layers and concatenate them? (See the sketch below.) – Arepo Jul 30 '18 at 21:51
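
For reference, a per-filter mask would avoid splitting the layer: give the mask the kernel's full shape and zero only the pruned output channels. A sketch adapting the idea from the linked answer (standalone Keras 2.x assumed; MaskedConv2D and pruned_filters are made-up names, not an existing API):

import numpy as np
import keras.backend as K
from keras.layers import Conv2D

class MaskedConv2D(Conv2D):
    """Conv2D that multiplies its kernel by a fixed 0/1 mask, so the
    selected filters (and their gradients) stay at zero during training."""

    def __init__(self, filters, kernel_size, pruned_filters=(), **kwargs):
        super(MaskedConv2D, self).__init__(filters, kernel_size, **kwargs)
        self.pruned_filters = list(pruned_filters)

    def build(self, input_shape):
        super(MaskedConv2D, self).build(input_shape)
        kernel_mask = np.ones(K.int_shape(self.kernel))  # (h, w, in, out)
        kernel_mask[..., self.pruned_filters] = 0        # zero the chosen output channels
        self.kernel_mask = K.constant(kernel_mask)
        if self.use_bias:
            bias_mask = np.ones(self.filters)
            bias_mask[self.pruned_filters] = 0
            self.bias_mask = K.constant(bias_mask)

    def call(self, inputs):
        # same as Conv2D.call, but with the masked kernel and bias
        outputs = K.conv2d(inputs,
                           self.kernel * self.kernel_mask,
                           strides=self.strides,
                           padding=self.padding,
                           data_format=self.data_format,
                           dilation_rate=self.dilation_rate)
        if self.use_bias:
            outputs = K.bias_add(outputs, self.bias * self.bias_mask,
                                 data_format=self.data_format)
        if self.activation is not None:
            outputs = self.activation(outputs)
        return outputs

Usage would then be, e.g., MaskedConv2D(16, (3, 3), pruned_filters=[0], padding='same', activation='relu') in place of the original Conv2D, keeping the layer in one piece.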

0 Answers