
The MobileNetV2 paper introduced the inverted residual block with a linear bottleneck. It is shown diagrammatically in Fig. 3(b) (the same figure appears as Fig. 3 of the MobileNetV3 paper) and mathematically in Table 1 of the paper.
[Images in the original post: Fig. 3(b) from the paper, Table 1 from the paper, and the PyTorch inverted residual block code]

Fig. 3(b) shows the shortcut connection going from the input bottleneck (a linear bottleneck) to an output bottleneck produced by a 1×1 convolution followed by ReLU6. This looks wrong, because ReLU6 squashes too much information when it is applied after a dimensionality reduction.
Table 1, however, shows the shortcut connection going from the input bottleneck to an output bottleneck (a linear bottleneck) created by a 1×1 convolution without ReLU6. The PyTorch MobileNetV2 model follows the table, as in the sketch below.
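
For reference, here is a minimal sketch of the block as Table 1 describes it (class and variable names are mine, not the exact torchvision code). Note that the final 1×1 projection has no ReLU6, and the shortcut is added after it, bottleneck to bottleneck:

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Sketch of the MobileNetV2 inverted residual block, per Table 1."""

    def __init__(self, in_ch: int, out_ch: int, stride: int, expand_ratio: int):
        super().__init__()
        hidden = in_ch * expand_ratio
        # Shortcut only when spatial size and channel count are preserved.
        self.use_shortcut = stride == 1 and in_ch == out_ch

        layers = []
        if expand_ratio != 1:
            # 1x1 pointwise expansion, followed by ReLU6.
            layers += [nn.Conv2d(in_ch, hidden, 1, bias=False),
                       nn.BatchNorm2d(hidden),
                       nn.ReLU6(inplace=True)]
        # 3x3 depthwise convolution, followed by ReLU6.
        layers += [nn.Conv2d(hidden, hidden, 3, stride, 1,
                             groups=hidden, bias=False),
                   nn.BatchNorm2d(hidden),
                   nn.ReLU6(inplace=True)]
        # 1x1 pointwise projection: the *linear* bottleneck.
        # No ReLU6 here, so the shortcut lands on an output that
        # was never squashed by the activation.
        layers += [nn.Conv2d(hidden, out_ch, 1, bias=False),
                   nn.BatchNorm2d(out_ch)]
        self.block = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.use_shortcut:
            # Shortcut from input bottleneck to output bottleneck.
            return x + self.block(x)
        return self.block(x)
```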

So am I right that Fig. 3(b) is drawn incorrectly, and that it should match Table 1 of the paper?
