I am trying to validate the output of the first layer of my network, built with standard Keras. The first layer is named conv2d.

I built a new Model just to get the output of the first layer, using the following code:

import numpy as np
from tensorflow.keras.models import Model

inter_layer = None
weights = None
biases = None

for layer in qmodel.layers:
    if layer.name == "conv2d":
        print("Found layer: " + layer.name)
        inter_layer = layer
        weights = layer.get_weights()[0]  # kernel, shape (3, 3, 3, n_filters)
        biases = layer.get_weights()[1]   # bias, shape (n_filters,)

inter_model = Model(qmodel.input, inter_layer.output)

inter_model.compile()  # not strictly needed just to call predict()

Then, I did the following (img_test is one of the cifar10 images):

first_layer_output = inter_model.predict(img_test)

# Get the 3x3 pixel upper left patch of the 3 channels of the input image
img_test_slice = img_test[0,:3,:3,:]
# Get only the first filter of the layer
weights_slice = weights[:,:,:,0]
# Get the bias of the first filter of the layer
bias_slice = biases[0]
# Get the 3x3 pixel upper left patch of the first channel of the output of the layer
output_slice = first_layer_output[0,:3,:3,0]
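
(For reference, img_test here is just a single CIFAR-10 image kept with a leading batch dimension, along the lines of this sketch; any scaling/preprocessing the model expects is omitted:)

from tensorflow.keras.datasets import cifar10

(_, _), (x_test, _) = cifar10.load_data()
# a single test image, keeping a leading batch dimension: shape (1, 32, 32, 3)
img_test = x_test[:1].astype("float32")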

I printed the shape of each slice, and got the correct shapes:

  • img_test_slice: (3,3,3)
  • weights_slice: (3,3,3)
  • output_slice: (3,3)

As far as I understand, if I do this:

partial_sum = np.multiply(img_test_slice, weights_slice)
output_pixel = partial_sum.sum() + bias_slice

output_pixel should be one of the values of output_slice (specifically the value at index [1,1], because the layer uses padding='same').

But.... it is not.

Perhaps I am missing something very simple about how the convolution is calculated, but as far as I understand, doing the elementwise multiplication and then summing all the values plus the bias should give one of the output pixels of the layer.
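
Just to illustrate what I mean, here is a standalone sketch with a plain (non-quantized) Conv2D and random data (not my actual model), where this kind of manual check does hold:

import numpy as np
import tensorflow as tf

x = np.random.rand(1, 32, 32, 3).astype("float32")
conv = tf.keras.layers.Conv2D(8, 3, padding="same")
y = conv(x).numpy()        # building the layer and getting its output
w, b = conv.get_weights()  # kernel (3, 3, 3, 8) and bias (8,)

manual = np.multiply(x[0, :3, :3, :], w[:, :, :, 0]).sum() + b[0]
print(np.isclose(manual, y[0, 1, 1, 0]))  # True for a plain float Conv2D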

Perhaps the output data of the layer is arranged in a different manner than the input of the layer?

1 Answer

The problem was the use of the get_weights method.

My model was using QKeras layers, and when you use these layers you shouldn't use get_weights to get the layer weights. Instead, do something like:

import tensorflow as tf

real_weights = []
for quantizer, weight in zip(layer.get_quantizers(), layer.get_weights()):
    if quantizer:
        weight = tf.constant(weight)
        weight = tf.keras.backend.eval(quantizer(weight))
    real_weights.append(weight)  # the quantized weights the layer actually applies

If you extract the weights with this loop, you get the real quantized weights, and with those the calculations from the question are correct.
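
As a rough sketch, reusing the slice names from the question (real_weights is the list filled in the loop above, with the quantized kernel at index 0 and, assuming the layer has a bias, the bias at index 1):

import numpy as np

q_kernel, q_bias = real_weights[0], real_weights[1]

# same slices as in the question, but taken from the quantized weights
weights_slice = q_kernel[:, :, :, 0]
bias_slice = q_bias[0]

output_pixel = np.multiply(img_test_slice, weights_slice).sum() + bias_slice
print(np.isclose(output_pixel, output_slice[1, 1]))  # should now match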
