
I am using PyTorch 0.4.0 on Windows to build a CNN, and here is my code:

import torch
import torch.nn as nn
import torch.nn.functional as F

class net(nn.Module):
    def __init__(self):
        super(net, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=(1,3), stride=1)
        self.conv2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=(1,3), stride=1)
        self.dense1 = nn.Linear(32 * 28 * 24, 60)
        self.out = nn.Linear(60, 3)

    def forward(self, input):
        x = F.relu(self.conv1(input))
        x = F.relu(self.conv2(x))
        x = x.view(x.size(0), -1)  # flatten to (batch, 32*28*24)
        x = self.dense1(x)
        output = self.out(x)
        return output

but I get this error:

File "D:\Anaconda\lib\site-packages\torch\nn\modules\conv.py", line 301, in forward
    self.padding, self.dilation, self.groups)

RuntimeError: expected stride to be a single integer value or a list of 1 values to match the convolution dimensions, but got stride=[1, 1]

I think this means I made a mistake somewhere in the code above, but I don't know how to fix it. Can anyone help me out? Thanks in advance!

  • When is your error produced? I cannot reproduce it, neither in the forward nor the backward pass. – McLawrence Apr 29 '18 at 10:47
  • I have the same problem (Anaconda, Python 3.6, pytorch 0.4.0) after saving the model and then reloading it. – sunside Apr 30 '18 at 16:38
  • Okay, so for me it was missing the batch dimension when I was feeding a single item. See [this](https://discuss.pytorch.org/t/expected-stride-to-be-a-single-integer-value-or-a-list-of-1-values-to-match-the-convolution-dimensions-but-got-stride-1-1/17140). – sunside May 01 '18 at 11:36
  • I found out that it was because of my input data type: my data should be a DoubleTensor but I used a FloatTensor. This error also occurs when the dimensions do not match. – Ddj May 01 '18 at 17:16
  • It's totally fine to answer and accept your own question btw., it might help future readers. :) – sunside May 02 '18 at 15:48
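The dtype pitfall mentioned in the comments can be sketched as follows (a minimal illustration, not the asker's actual data pipeline): `Conv2d` weights are float32 by default, so feeding a float64 tensor raises a `RuntimeError`, and casting the input fixes it.

```python
import torch
import torch.nn as nn

# Conv layer matching the question's first layer; weights are float32.
conv = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=(1, 3))

# A float64 input does not match the float32 weights and raises.
bad = torch.rand(1, 1, 28, 26, dtype=torch.float64)
try:
    conv(bad)
except RuntimeError as e:
    print("dtype mismatch:", e)

# Casting the input to float32 makes the forward pass work.
ok = conv(bad.float())
print(ok.shape)  # torch.Size([1, 16, 28, 24])
```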

2 Answers


Maybe I know what's happening, because I faced the same runtime error just 4 or 5 hours ago.

Here is the solution in my case (I defined the dataset myself):

The image I feed into the net has 1 channel, the same as in your code (self.conv1 = nn.Conv2d(in_channels=1, ...)). The shape of the image that caused the runtime error was as follows:

[error_img: screenshot showing the failing image's shape, without a channel dimension]

The fixed image is as follows:

[fixed_img: screenshot showing the corrected image's shape, with the channel dimension]

Can you see the difference? The input image needs an explicit channel dimension: since the channel count is 1, img.shape should be (1, 100, 100), not (100, 100). Use img.reshape(1, 100, 100) to fix it, and the net's forward pass will proceed.

I hope this helps.
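The fix above can be sketched in a few lines (assuming a 100x100 single-channel image, as in the screenshots):

```python
import torch

# A bare (H, W) image has no channel dimension, but Conv2d expects (C, H, W).
img = torch.rand(100, 100)      # shape (100, 100): missing the channel dim

# Add the channel dimension explicitly, as described in the answer.
img = img.reshape(1, 100, 100)  # shape (1, 100, 100)
print(img.shape)                # torch.Size([1, 100, 100])
```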


One possible reason is the input fed to the model: it may be missing one of the expected dimensions (typically the batch dimension).

Try torch.unsqueeze(input, 0)
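As a minimal sketch (using input dimensions chosen to match the question's layers, not the asker's actual data): a single sample of shape (C, H, W) gains the batch dimension via unsqueeze before being passed to the net.

```python
import torch
import torch.nn as nn

# A conv layer like the question's first layer.
conv = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=(1, 3))

sample = torch.rand(1, 28, 26)        # one sample: (C, H, W)
batched = torch.unsqueeze(sample, 0)  # add batch dim: (1, C, H, W)

out = conv(batched)
print(out.shape)  # torch.Size([1, 16, 28, 24])
```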
