I am currently reading the following paper: "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size".
In Section 4.2.3 (Activation function layer), there is the following statement:
"The ramifications of the activation function is almost entirely constrained to the training phase, and it has little impact on the computational requirements during inference."
My understanding of the activation function is as follows: an activation function (e.g., ReLU) is applied element-wise to each unit of the feature map after the convolution operation, and this computation appears to be exactly the same in both training mode and inference mode, as the sketch below illustrates.
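Here is a minimal sketch of what I mean (using PyTorch; the conv block and tensor shapes are arbitrary illustration values, not taken from the paper). The same element-wise ReLU is executed whether the model is in training or evaluation mode:

```python
import torch
import torch.nn as nn

# A toy conv -> ReLU block; the channel counts and kernel size are arbitrary.
block = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1),
    nn.ReLU(),
)

x = torch.randn(1, 3, 32, 32)  # one 3-channel 32x32 input

# Forward pass in training mode.
block.train()
y_train = block(x)

# Forward pass in inference mode.
block.eval()
with torch.no_grad():
    y_eval = block(x)

# ReLU is applied element-wise in exactly the same way in both modes.
print(torch.equal(y_train, y_eval))  # True
```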
Why, then, can we say that the activation function has a large influence on training but little influence on inference? Can someone please explain this?