You clearly misunderstand the meaning of each operation and the final goal:
- final goal: classification for each pixel, i.e. softmax along the semantic class axis
- how to achieve this goal in the original code? Let's see the code line by line:
```python
reshape = Reshape((n_classes, self.img_rows * self.img_cols))(conv9)  # L1
permute = Permute((2, 1))(reshape)                                    # L2
activation = Activation('softmax')(permute)                           # L3
```
- L1's output dim = `n_class`-by-`n_pixs` (where `n_pixs = img_rows * img_cols`)
- L2's output dim = `n_pixs`-by-`n_class`
- L3's output dim = `n_pixs`-by-`n_class`
- Note that the default softmax activation is applied to the last axis, i.e. the axis that `n_class` stands for, which is the semantic class axis.
Therefore, this original code fulfills the final goal of semantic segmentation.
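To make this concrete, here is a minimal NumPy sketch of L1–L3 for a single sample. NumPy stands in for the Keras layers, the toy sizes are made up, and `conv9` is assumed channels-first (one score map per class), as the original code implies; Keras' `Reshape`/`Permute` exclude the batch axis, so it is dropped here too:

```python
import numpy as np

n_classes, img_rows, img_cols = 2, 2, 3   # toy sizes (assumed)
n_pixs = img_rows * img_cols

# conv9's per-sample output: one score map per class (channels-first)
conv9 = np.random.rand(n_classes, img_rows, img_cols)

# L1: reshape -> (n_classes, n_pixs); each row is still one class's scores
reshape = conv9.reshape(n_classes, n_pixs)

# L2: permute (2, 1) -> (n_pixs, n_classes); each row is now one pixel's scores
permute = reshape.T

# L3: softmax over the LAST axis, i.e. over the classes of each pixel
e = np.exp(permute - permute.max(axis=-1, keepdims=True))
activation = e / e.sum(axis=-1, keepdims=True)

# every pixel's class scores now sum to 1, as classification requires
assert activation.shape == (n_pixs, n_classes)
assert np.allclose(activation.sum(axis=-1), 1.0)
```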
Let's revisit the code that you want to change:

```python
reshape = Reshape((self.img_rows * self.img_cols, n_classes))(conv9)  # L4
```
- L4's output dim = `n_pixs`-by-`n_class`
My guess is that you think L4's output dim matches L2's, and thus L4 is a shortcut equivalent to executing L1 and L2.
However, matching the shape does not necessarily mean matching the physical meaning of the axes. Why not? A simple example will show.
Say you have 2 semantic classes and 3 pixels. To see the difference, assume all three pixels belong to the same class.
In other words, a ground truth tensor will look like this
```python
#   cls#1  cls#2
[ [0, 1],   # pixel #1
  [0, 1],   # pixel #2
  [0, 1],   # pixel #3
]
```
Assume you have a perfect network that generates the exact response for each pixel. Your solution will nevertheless create a tensor like the one below,
```python
#   cls#1  cls#2
[ [0, 0],   # pixel #1
  [0, 1],   # pixel #2
  [1, 1],   # pixel #3
]
```
whose shape is the same as the ground truth's but fails to match the physical meaning of the axes.
This in turn makes the softmax operation meaningless, because it is supposed to be applied along the class dimension, yet that dimension no longer physically exists. As a result, applying softmax leads to the following erroneous output,
```python
#   cls#1  cls#2
[ [0.5,  0.5 ],   # pixel #1
  [0.27, 0.73],   # pixel #2
  [0.5,  0.5 ],   # pixel #3
]
```
which completely messes up the training, even under this ideal assumption.
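A quick NumPy check of the softmax output on the scrambled tensor (the `softmax` helper is defined here, not taken from any library):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# the scrambled (3, 2) tensor produced by the wrong reshape
wrong = np.array([[0., 0.],
                  [0., 1.],
                  [1., 1.]])

probs = softmax(wrong)
# pixels #1 and #3 collapse to a useless uniform distribution
assert np.allclose(probs[0], [0.5, 0.5])
assert np.allclose(probs[2], [0.5, 0.5])
# pixel #2 only looks right because of where the values happened to land
assert np.allclose(probs[1], [0.2689, 0.7311], atol=1e-4)
```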
Therefore, it is a good habit to write down the physical meaning of each axis of a tensor. Whenever you reshape a tensor, ask yourself whether the physical meaning of each axis changes in the way you expect.
For example, if you have a tensor `T` of shape `batch_dim x img_rows x img_cols x feat_dim`, you can do many things, and not all of them make sense (because they break the physical meaning of the axes):
- (Wrong) reshape it to `whatever x feat_dim`, because the `whatever` dimension is meaningless at test time, where the batch size might be different.
- (Wrong) reshape it to `batch_dim x feat_dim x img_rows x img_cols`, because the 2nd dimension is NOT the feature dimension, and neither are the 3rd and 4th.
- (Correct) permute the axes with (3,1,2), which gives you a tensor of shape `batch_dim x feat_dim x img_rows x img_cols` while keeping the physical meaning of each axis.
- (Correct) reshape it to `batch_dim x whatever x feat_dim`. This is also valid, because `whatever = img_rows x img_cols` is equivalent to the pixel-location dimension, and the meanings of both `batch_dim` and `feat_dim` are unchanged.
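The two correct options can be checked with NumPy. Note that NumPy's `transpose`, unlike Keras' `Permute`, includes the batch axis, so permute (3,1,2) becomes transpose (0,3,1,2); the sizes are made up:

```python
import numpy as np

batch_dim, img_rows, img_cols, feat_dim = 4, 5, 6, 7   # toy sizes (assumed)
T = np.random.rand(batch_dim, img_rows, img_cols, feat_dim)

# (Correct) move the feature axis forward while keeping each axis's meaning
channels_first = np.transpose(T, (0, 3, 1, 2))
assert channels_first.shape == (batch_dim, feat_dim, img_rows, img_cols)

# (Correct) merge the two spatial axes into one pixel-location axis;
# batch_dim and feat_dim keep their meaning
pixels = T.reshape(batch_dim, img_rows * img_cols, feat_dim)
assert pixels.shape == (batch_dim, img_rows * img_cols, feat_dim)

# every pixel's feature vector is preserved exactly in both cases
assert np.allclose(pixels[0, 0], T[0, 0, 0])
assert np.allclose(channels_first[0, :, 1, 2], T[0, 1, 2, :])
```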