
I have a toy Keras/TensorFlow CNN for MNIST digit classification in which I have quantized the two Conv2D layers and two Dense layers to 4-bit. Now I want to access the weights of a Conv2D layer, but calling get_weights() returns a list of float32 np.arrays. As I understand it, this is because QAT keeps the latent weights in float32. To access the quantized weights of a QAT model you would normally convert it to TFLite, which I can't do in my case because of the annotated (4-bit quantized) layers. Any suggestion on how to properly access those 4-bit quantized weights? This is my quantized model:

QAT_model = tfmot.quantization.keras.quantize_annotate_model(keras.Sequential([
    tfmot.quantization.keras.quantize_annotate_layer(
        tf.keras.layers.Conv2D(6, kernel_size=(5, 5), activation='relu', input_shape=input_shape),
        DefaultDenseQuantizeConfig()),
    tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
    tf.keras.layers.Dropout(0.25),
    tfmot.quantization.keras.quantize_annotate_layer(
        tf.keras.layers.Conv2D(10, kernel_size=(5, 5), activation='relu'),
        DefaultDenseQuantizeConfig()),
    tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Flatten(),
    tfmot.quantization.keras.quantize_annotate_layer(
        tf.keras.layers.Dense(256, activation='softmax'),
        DefaultDenseQuantizeConfig()),
    tfmot.quantization.keras.quantize_annotate_layer(
        tf.keras.layers.Dense(10, activation='softmax'),
        DefaultDenseQuantizeConfig()),
]))

with tfmot.quantization.keras.quantize_scope(
  {'DefaultDenseQuantizeConfig': DefaultDenseQuantizeConfig}):
  # Use `quantize_apply` to actually make the model quantization aware.
  quantized_model = tfmot.quantization.keras.quantize_apply(QAT_model)

quantized_model.summary()
quantized_model.compile(optimizer='adam',
                        loss=tf.keras.losses.categorical_crossentropy,
                        metrics=['accuracy'])

quantized_model.fit(input_train, target_train, epochs=10)

val_loss, val_acc = quantized_model.evaluate(input_test,target_test)
print('test accuracy:', val_acc)

for layer in quantized_model.layers:
  print(layer.name, layer)
print(quantized_model.layers[1].weights)  # quantize-wrapped Conv2D layer
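For context on why get_weights() shows float32: during QAT the stored kernel stays in float32, and the 4-bit values only exist after the layer's quantizer fake-quantizes it (float → 4-bit grid → float) in the forward pass. The per-tensor symmetric variant of that math can be sketched in pure NumPy; note this is an illustrative sketch, not tfmot's internal implementation, and the exact scheme (symmetric, per-tensor, narrow range) is an assumption that depends on your DefaultDenseQuantizeConfig:

```python
import numpy as np

def fake_quant_4bit(w, num_bits=4):
    """Symmetric per-tensor fake-quantization: float -> integer codes -> float.

    Returns the integer 4-bit codes and the dequantized float weights
    (what a QAT layer would actually multiply by in the forward pass).
    """
    max_abs = np.max(np.abs(w))
    levels = 2 ** (num_bits - 1) - 1              # 7 for signed 4-bit, narrow range
    scale = max_abs / levels if max_abs > 0 else 1.0
    codes = np.clip(np.round(w / scale), -levels, levels)
    return codes.astype(np.int8), (codes * scale).astype(np.float32)

# Toy latent float kernel values, as get_weights() would return them
w = np.array([0.3, -0.7, 0.1, 0.02], dtype=np.float32)
codes, dequantized = fake_quant_4bit(w)
print(codes)        # integer codes in [-7, 7]
print(dequantized)  # float32 values snapped to the 4-bit grid
```

To get the actual quantized values out of the trained model, you could apply the same idea using each wrapped layer's own quantizer and its learned min/max variables (visible in `layer.weights`), rather than this standalone sketch.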
    All Keras layers have get_weights/set_weights, a quick look at the documentation would have revealed this. – Dr. Snoopy Oct 22 '21 at 11:51
  • thank you @Dr.Snoopy , but I just reframed what I was exactly looking for – venkat reddy Oct 22 '21 at 12:33
  • Hi @venkatreddy! In line with the above answer, attaching a reference on get_weights/set_weights. Thanks. https://colab.research.google.com/github/tensorflow/docs/blob/snapshot-keras/site/en/guide/keras/save_and_serialize.ipynb –  Nov 04 '21 at 05:39

0 Answers