I want to compute the reconstruction loss of my autoencoder using nn.CrossEntropyLoss:

ae_criterion = nn.CrossEntropyLoss()
ae_loss = ae_criterion(X, Y)
where X is the autoencoder's reconstruction and Y is the target (since it is an autoencoder, Y is the same as the original input X). Both X and Y have shape [42, 32, 130] = [batch_size, timesteps, number_of_classes]. When I run the code above I get the following error:
ValueError: Expected target size (42, 130), got torch.Size([42, 32, 130])
After looking at the docs, I'm still unsure how I should call nn.CrossEntropyLoss() appropriately. It seems that I should change Y to be of shape [42, 32, 1], with each element being a scalar in the interval [0, 129] (or [1, 130]), am I right?
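If my guess is right, I believe the call would look roughly like this (a sketch of my understanding, with dummy tensors in place of my real data; the docs seem to say that for multi-dimensional inputs CrossEntropyLoss wants the class dimension second and the target as class indices, hence the permute and argmax):

```python
import torch
import torch.nn as nn

# Dummy tensors with the shapes from above:
# [batch_size, timesteps, number_of_classes] = [42, 32, 130]
X = torch.rand(42, 32, 130)  # reconstruction
Y = X.clone()                # target (same as the input)

criterion = nn.CrossEntropyLoss()
# For inputs with extra dimensions, CrossEntropyLoss takes input [N, C, d1]
# and target [N, d1] of class indices, so move the class axis to position 1
# and collapse the class axis of the target with argmax:
logits = X.permute(0, 2, 1)  # [42, 130, 32]
target = Y.argmax(dim=-1)    # [42, 32], values in [0, 129]
loss = criterion(logits, target)
```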
Is there a way to avoid this? Since X and Y are between 0 and 1, could I just use binary cross-entropy loss element-wise in an equivalent way?