
I have two tensors of different sizes to feed into the network.

import torch
import torch.nn as nn

C = nn.Conv1d(1, 1, kernel_size=1, stride=2)
TC = nn.ConvTranspose1d(1, 1, kernel_size=1, stride=2)

a = torch.rand(1, 1, 100)
b = torch.rand(1, 1, 101)

a_out, b_out = TC(C(a)), TC(C(b))

The resulting shapes are

a_out.shape  # torch.Size([1, 1, 99]), but what I want is [1, 1, 100]
b_out.shape  # torch.Size([1, 1, 101])

Is there a way to handle this problem?
Thanks.


1 Answer


This is expected behaviour per the documentation: with kernel_size=1 and stride=2, an even-length input comes back one element shorter after the conv/deconv round trip, while an odd-length input keeps its length. Padding can be applied when an even input length is detected so the output has the same length as the input.
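
For reference, here is a quick sketch of the documented output-length formulas applied to your settings; the helper names conv_len and deconv_len are just for illustration, not part of any API:

def conv_len(L, k=1, s=2, p=0, d=1):
    # Conv1d output length per the PyTorch docs
    return (L + 2 * p - d * (k - 1) - 1) // s + 1

def deconv_len(L, k=1, s=2, p=0, d=1, op=0):
    # ConvTranspose1d output length per the PyTorch docs
    return (L - 1) * s - 2 * p + d * (k - 1) + op + 1

for L in (100, 101):
    print(L, "->", deconv_len(conv_len(L)))  # 100 -> 99, 101 -> 101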

Something like this:

class PadEven(nn.Module):
    """Run conv then deconv, and pad the result by one element on the right
    whenever the input length was even, so the output length matches the input."""

    def __init__(self, conv, deconv, pad_value=0, padding=(0, 1)):
        super().__init__()
        self.conv = conv
        self.deconv = deconv
        # (0, 1) pads zero elements on the left and one on the right
        self.pad = nn.ConstantPad1d(padding=padding, value=pad_value)

    def forward(self, x):
        nd = x.size(-1)                  # original input length
        x = self.deconv(self.conv(x))
        if nd % 2 == 0:                  # even lengths come back one short
            x = self.pad(x)
        return x


C = nn.Conv1d(1, 1, kernel_size=1, stride=2)
TC = nn.ConvTranspose1d(1, 1, kernel_size=1, stride=2)
P = PadEven(C, TC)

a = torch.rand(1, 1, 100)
b = torch.rand(1, 1, 101)

a_out, b_out = P(a), P(b)
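
With these definitions, checking the shapes should now give the lengths you asked for:

print(a_out.shape)  # torch.Size([1, 1, 100])
print(b_out.shape)  # torch.Size([1, 1, 101])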