
Is there a way to show the reduced number of FLOPs of a model after pruning (prune_low_magnitude with tensorflow_model_optimization)? I tried to compare the default and the pruned model, but I couldn't find a case where the pruned model has fewer FLOPs, even though the size of the model was reduced considerably. I tried it with https://pypi.org/project/model-profiler but I think it didn't ignore the zero weights (see the sketch below).
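For reference, here is roughly the comparison I ran. This is a minimal sketch: the architecture and the 50% sparsity target are just placeholders, and the training step is elided.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot
from model_profiler import model_profiler

# Placeholder baseline model
base_model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])

# Wrap the model for magnitude pruning (constant 50% sparsity as an example)
pruned_model = tfmot.sparsity.keras.prune_low_magnitude(
    base_model,
    pruning_schedule=tfmot.sparsity.keras.ConstantSparsity(0.5, begin_step=0),
)
# ... compile and fit with the tfmot.sparsity.keras.UpdatePruningStep() callback ...

# Remove the pruning wrappers; the zeroed weights stay in the dense kernels
stripped_model = tfmot.sparsity.keras.strip_pruning(pruned_model)

# Both profiles report the same FLOPs, since the layer shapes are unchanged
print(model_profiler(base_model, Batch_size=1))
print(model_profiler(stripped_model, Batch_size=1))
```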

Or is there another good way to compare their performance?

Thank you

Thrangel
  • hi @trangel, welcome to SO! You are talking about TF, but I'm not sure that is a hard requirement. It may be worth checking out https://reposhub.com/python/deep-learning/1adrianb-pytorch-estimate-flops.html – Stereo Dec 27 '21 at 10:30
  • Please provide enough code so others can better understand or reproduce the problem. – Community Dec 27 '21 at 10:30

1 Answer


I just ran into the same problem. As you mention, the profiler does not ignore zero weights, since it derives the FLOP count from the model's architecture alone. One could therefore extend the FLOP profiler to take zero weights into account (a sketch of that idea follows below). However, the following post notes that pruning by itself does not accelerate model inference:
Pruning does not accelerate inference
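As a sketch of the zero-aware counting idea: the hypothetical helper below compares a naive multiply-add count against one that skips pruned weights. It only handles Dense kernels and ignores biases, activations, and other layer types.

```python
import numpy as np
import tensorflow as tf

def effective_dense_flops(model):
    """Naive vs. sparsity-aware FLOP count for the Dense layers of a model.

    Counts 2 * weights multiply-adds per kernel; the 'effective' count
    simply skips weights that pruning has set to zero.
    """
    naive, effective = 0, 0
    for layer in model.layers:
        if isinstance(layer, tf.keras.layers.Dense):
            kernel = layer.kernel.numpy()
            naive += 2 * kernel.size                    # every weight used
            effective += 2 * np.count_nonzero(kernel)   # zeros ignored
    return naive, effective
```

Note that this only changes the number you report; the actual dense matrix multiplications executed at inference time are unaffected.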

I have not validated this myself yet, but if that is the case, it may be necessary to restructure the model's architecture after pruning by actually removing the pruned zero weights. After this step, the FLOP count should drop, and inference should hopefully also accelerate.
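One way that restructuring could look is sketched below, under the assumption that pruning zeroed out entire kernel columns (i.e. whole output units); `shrink_dense` is a hypothetical helper, and the next layer's input dimension would have to be shrunk to match.

```python
import numpy as np
import tensorflow as tf

def shrink_dense(layer):
    # Keep only output units whose kernel column still has a nonzero weight.
    kernel, bias = layer.get_weights()
    keep = np.any(kernel != 0, axis=0)
    new_layer = tf.keras.layers.Dense(int(keep.sum()),
                                      activation=layer.activation)
    new_layer.build((None, kernel.shape[0]))
    new_layer.set_weights([kernel[:, keep], bias[keep]])
    return new_layer
```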

Edit (30/03/2022): If you are interested in structured model pruning, I recommend having a look at the NNI AutoML toolkit.

Janleo500