As shown in the code below, after feeding the input image through the model to obtain the output, I want to use argmax to collapse the class dimension, but the result of argmax is all zeros. What is causing this? Should I use a different method for this reduction, or change my model?
import torch

newModel = CNNSEG()
newModel.load_state_dict(torch.load(PATH))
newModel.eval()

with torch.no_grad():
    for iteration, sample in enumerate(test_data_loader):
        img = sample
        # add a channel dimension and run the forward pass
        inputs = img.unsqueeze(1)
        outputs = newModel(inputs)  # shape: [2, 4, 96, 96]
        print('outputs', outputs)
        # collapse the class dimension with argmax
        out = torch.argmax(outputs, dim=1)  # shape: [2, 96, 96]
        print('out', out)
The result of out is shown below, even though outputs itself contains non-zero values everywhere.
tensor([[[0, 0, 0, ..., 0, 0, 0],
         [0, 0, 0, ..., 0, 0, 0],
         [0, 0, 0, ..., 0, 0, 0],
         ...,
         [0, 0, 0, ..., 0, 0, 0],
         [0, 0, 0, ..., 0, 0, 0],
         [0, 0, 0, ..., 0, 0, 0]],

        [[0, 0, 0, ..., 0, 0, 0],
         [0, 0, 0, ..., 0, 0, 0],
         [0, 0, 0, ..., 0, 0, 0],
         ...,
         [0, 0, 0, ..., 0, 0, 0],
         [0, 0, 0, ..., 0, 0, 0],
         [0, 0, 0, ..., 0, 0, 0]]])
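
To check that I am using argmax correctly, I ran it on a small made-up tensor (the shape and values below are just for illustration, not from my model). Over dim=1 it picks, for each pixel, the index of the channel with the largest value, so out should only be all zeros if channel 0 has the largest logit at every position:

import torch

# made-up logits: batch of 1, 3 classes, 2x2 image
logits = torch.tensor([[[[0.9, 0.1],
                         [0.2, 0.3]],   # channel 0
                        [[0.1, 0.8],
                         [0.1, 0.2]],   # channel 1
                        [[0.0, 0.1],
                         [0.7, 0.5]]]]) # channel 2

pred = torch.argmax(logits, dim=1)  # shape: [1, 2, 2]
print(pred)
# tensor([[[0, 1],
#          [2, 2]]])

This behaves as expected, so I suspect the problem is that channel 0 of my model's outputs is the largest at every pixel, rather than argmax itself.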