
When we convert a tf.keras model containing PReLU layers to TFLite with TF 1.15, the PReLU layers become ReLU and appear to get fused with the preceding operators. As a result, the 28 MB Keras h5 file shrinks to 1.3 MB. The number of parameters appears to drop significantly, presumably because I did not use the shared_axes option with PReLU (so the PReLU layers hold a large share of the weights). So, does this conversion work properly without any accuracy loss? Are the weights of PReLU discarded altogether? Similarly, does the fusion take the bias of the transpose convolution layers into account (the bias is not shown as an input property in Netron)? Do these fusions preserve the trained weight parameters internally, and do they affect the inference accuracy of the tflite model?

PReLU fusion:-

from tensorflow.keras.layers import Input, Conv2D, PReLU

input = Input(shape=(512, 512, 3), name='ip')
x = Conv2D(filters=8, kernel_size=2, strides=2, padding='valid')(input)
x = PReLU()(x)  # shared_axes not used, so alpha is learned per element (256x256x8 here)
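
For reference, a minimal sketch of the conversion step being discussed (the file names are placeholders and I am assuming the TF 1.15 from_keras_model_file API; this is not copied from my actual script):

import tensorflow as tf  # tf 1.15

# Convert the saved Keras .h5 model to tflite; the PReLU/bias fusion happens here.
converter = tf.lite.TFLiteConverter.from_keras_model_file('model.h5')  # hypothetical path
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)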


In Netron, the output property shows prelu/ReLU.

Transpose convolution:-

cout1 = Conv2DTranspose(filters=8, kernel_size=2, strides=2, padding='same')(pout1)  # use_bias=True by default


In Netron, the bias does not appear in the output property.
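
To confirm that the bias actually exists on the Keras side before conversion, something like this can be used (a minimal sketch; it only rebuilds the layer so the layer object stays accessible):

from tensorflow.keras.layers import Conv2DTranspose

tconv = Conv2DTranspose(filters=8, kernel_size=2, strides=2, padding='same')  # use_bias=True by default
cout1 = tconv(pout1)            # same call as above, but keeping the layer object
for w in tconv.weights:
    print(w.name, w.shape)      # expect both a kernel and a bias tensor in Keras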

So, does the fusion work properly by combining the weights, or are they being discarded?

anilsathyan7

1 Answer


If all the values in a weight tensor are zeros, the converter automatically discards that tensor during fusion/conversion. That is why PReLU became ReLU and transpose conv + bias became plain transpose conv: the model was converted before training, so the PReLU alphas and the bias still had their default values (zeros). The problem only arises when you convert a model to tflite format before training; once the weights have been trained to non-zero values, they are kept by the fusion and the inference accuracy of the tflite model is not affected.
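
To see the difference, you can convert the same graph with non-zero PReLU weights, simulating a trained model, and inspect the converted tensors (a minimal sketch; the constant alpha value and file names are arbitrary placeholders):

import tensorflow as tf  # tf 1.15
from tensorflow.keras.layers import Input, Conv2D, PReLU
from tensorflow.keras.models import Model
from tensorflow.keras.initializers import Constant

# Same layers as in the question, but with non-zero PReLU alphas so they
# cannot be dropped as all-zero constants during conversion.
inp = Input(shape=(512, 512, 3), name='ip')
x = Conv2D(filters=8, kernel_size=2, strides=2, padding='valid')(inp)
x = PReLU(alpha_initializer=Constant(0.25))(x)
model = Model(inp, x)
model.save('prelu_model.h5')

converter = tf.lite.TFLiteConverter.from_keras_model_file('prelu_model.h5')
tflite_model = converter.convert()

# List the tensors in the converted model: with non-zero alphas, the PReLU
# alpha tensor should still be present instead of the layer collapsing to ReLU.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
for d in interpreter.get_tensor_details():
    print(d['name'], d['shape'])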

anilsathyan7