I have defined my autoencoder in PyTorch as follows (it gives me an 8-dimensional bottleneck at the output of the encoder, which works fine: torch.Size([1, 8, 1, 1])):
self.encoder = nn.Sequential(
    nn.Conv2d(input_shape[0], 32, kernel_size=8, stride=4),
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=4, stride=2),
    nn.ReLU(),
    nn.Conv2d(64, 8, kernel_size=3, stride=1),
    nn.ReLU(),
    nn.MaxPool2d(7, stride=1)
)
self.decoder = nn.Sequential(
    nn.ConvTranspose2d(8, 64, kernel_size=3, stride=1),
    nn.ReLU(),
    nn.Conv2d(64, 32, kernel_size=4, stride=2),
    nn.ReLU(),
    nn.Conv2d(32, input_shape[0], kernel_size=8, stride=4),
    nn.ReLU(),
    nn.Sigmoid()
)
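For reference, here is a shape trace through the encoder, which shows why the bottleneck ends up at 1 x 1. This is a minimal sketch assuming an 84 x 84 input with 3 channels (those values are my assumption here, but they are consistent with the [1, 8, 1, 1] bottleneck I get):

import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=8, stride=4),  # 3 input channels assumed
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=4, stride=2),
    nn.ReLU(),
    nn.Conv2d(64, 8, kernel_size=3, stride=1),
    nn.ReLU(),
    nn.MaxPool2d(7, stride=1),
)

x = torch.rand(1, 3, 84, 84)  # assumed input size
for layer in encoder:
    x = layer(x)
    print(type(layer).__name__, tuple(x.shape))
# Conv2d    (1, 32, 20, 20)
# ReLU      (1, 32, 20, 20)
# Conv2d    (1, 64, 9, 9)
# ReLU      (1, 64, 9, 9)
# Conv2d    (1, 8, 7, 7)
# ReLU      (1, 8, 7, 7)
# MaxPool2d (1, 8, 1, 1)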
What I cannot do is train the autoencoder with:
def forward(self, x):
    x = self.encoder(x)
    x = self.decoder(x)
    return x
The decoder gives me an error saying that it cannot upsample the tensor:
Calculated padded input size per channel: (3 x 3). Kernel size: (4 x 4). Kernel size can't be greater than actual input size
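The failing step can be reproduced in isolation starting from the [1, 8, 1, 1] bottleneck: the first ConvTranspose2d only upsamples 1 x 1 to 3 x 3, and the following Conv2d with kernel_size=4 then fails because the kernel is larger than its 3 x 3 input:

import torch
import torch.nn as nn

bottleneck = torch.rand(1, 8, 1, 1)  # same shape as the encoder output
up = nn.ConvTranspose2d(8, 64, kernel_size=3, stride=1)(bottleneck)
print(up.shape)  # torch.Size([1, 64, 3, 3])
nn.Conv2d(64, 32, kernel_size=4, stride=2)(up)
# RuntimeError: Calculated padded input size per channel: (3 x 3).
# Kernel size: (4 x 4). Kernel size can't be greater than actual input size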