
I am trying to train a CNN on MNIST in PyTorch, but I am getting ValueError: Expected input batch_size (500) to match target batch_size (1000). The error occurs when I call test() in the code below. I have looked up solutions to this error, but none of them fixed my issue.

My code is as follows:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

n_epochs = 20
batch_size_train = 64
batch_size_test = 1000
learning_rate = 1e-4
log_interval = 50
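
(For reference, train_loader and test_loader are the standard torchvision MNIST loaders. I've omitted them above; they look roughly like this, where the './data' path and the plain ToTensor() transform are just placeholders:)

import torchvision

train_loader = torch.utils.data.DataLoader(
    torchvision.datasets.MNIST('./data', train=True, download=True,
                               transform=torchvision.transforms.ToTensor()),
    batch_size=batch_size_train, shuffle=True)
test_loader = torch.utils.data.DataLoader(
    torchvision.datasets.MNIST('./data', train=False, download=True,
                               transform=torchvision.transforms.ToTensor()),
    batch_size=batch_size_test, shuffle=False)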

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 64, kernel_size=5)
        self.conv2 = nn.Conv2d(64, 128, kernel_size=1)
        self.fc1 = nn.Linear(9216, 100)
        self.fc2 = nn.Linear(100, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2(x), 2))
        x = x.view(-1, 9216)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)
    
    def loss_function(self, out, target):
        return F.cross_entropy(out, target)

def init_weights(m):
    if type(m) == nn.Linear or type(m) == nn.Conv2d:
        torch.nn.init.xavier_uniform_(m.weight)
        m.bias.data.fill_(0.01)

network = Net()
network.apply(init_weights)
network.cuda()

optimizer = optim.Adam(network.parameters(), lr=1e-4)

def train(epoch):
  network.train()
  for batch_idx, (data, target) in enumerate(train_loader):
    data = data.cuda()
    target = target.cuda()
    optimizer.zero_grad()
    output = network(data)
    loss = network.loss_function(output, target)
    loss.backward()
    optimizer.step()
    if batch_idx % log_interval == 0:
      print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
        epoch, batch_idx * len(data), len(train_loader.dataset),
        100. * batch_idx / len(train_loader), loss.item()))

def test():
  network.eval()
  test_loss = 0
  correct = 0
  with torch.no_grad():
    for data, target in test_loader:
      data = data.cuda()
      target = target.cuda()
      target = target.view(batch_size_test)
      output = network(data)
      test_loss += network.loss_function(output, target).item()
      pred = output.data.max(1, keepdim=True)[1]
      correct += pred.eq(target.data.view_as(pred)).sum()
  test_loss /= len(test_loader.dataset)
  print('\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
    test_loss, correct, len(test_loader.dataset),
    100. * correct / len(test_loader.dataset)))

test()
for epoch in range(1, n_epochs + 1):
  train(epoch)
  test()

Full error log:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-12-ef6e122ea50c> in <module>()
----> 1 test()
      2 for epoch in range(1, n_epochs + 1):
      3   train(epoch)
      4   test()

3 frames
<ipython-input-9-23a4b65d1ae9> in test()
      9       target = target.view(batch_size_test)
     10       output = network(data)
---> 11       test_loss += network.loss_function(output, target).item()
     12       pred = output.data.max(1, keepdim=True)[1]
     13       correct += pred.eq(target.data.view_as(pred)).sum()

<ipython-input-5-d97bf44ef6f0> in loss_function(self, out, target)
     91 
     92     def loss_function(self, out, target):
---> 93         return F.cross_entropy(out, target)

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
   2466     if size_average is not None or reduce is not None:
   2467         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2468     return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
   2469 
   2470 

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
   2260     if input.size(0) != target.size(0):
   2261         raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'
-> 2262                          .format(input.size(0), target.size(0)))
   2263     if dim == 2:
   2264         ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)

ValueError: Expected input batch_size (500) to match target batch_size (1000).

Please let me know how to fix this. Thanks, Vinny


1 Answer


Your data has shape [batch_size, c=1, h=28, w=28], where batch_size is 64 for the training set and 1000 for the test set. That difference doesn't matter here, though: the code should never have to deal with the first dim.

To use F.cross_entropy, you must provide a tensor of shape [batch_size, nb_classes]; here nb_classes is 10, so the last layer of your model should have 10 neurons.
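
To make the expected shapes concrete, here is a minimal sketch with random tensors (the sizes are illustrative, not taken from your model):

import torch
import torch.nn.functional as F

logits = torch.randn(64, 10)            # [batch_size, nb_classes]
target = torch.randint(0, 10, (64,))    # [batch_size], class indices
loss = F.cross_entropy(logits, target)  # scalar; the two batch dims must match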

As a side note, when using this criterion you shouldn't apply F.log_softmax to the model's output; as the F.cross_entropy documentation notes:

This criterion combines log_softmax and nll_loss in a single function.
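
You can check this equivalence yourself; a quick self-contained sanity check (random values again):

import torch
import torch.nn.functional as F

logits = torch.randn(8, 10)
target = torch.randint(0, 10, (8,))
a = F.cross_entropy(logits, target)
b = F.nll_loss(F.log_softmax(logits, dim=1), target)
print(torch.allclose(a, b))  # True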

This is not the issue though. The real problem is that your model doesn't output [batch_size, 10] tensors, because of your use of view: the tensor goes from torch.Size([64, 128, 6, 6]) to torch.Size([32, 9216]). You've basically said "squash everything into rows of 9216 values on dim=1 and let dim=0 be whatever is left over". But each sample only contributes 128*6*6 = 4608 values, so every row of 9216 swallows two samples, and your 64 samples collapse into 32 rows: you're mixing up the batches. (With the test batch of 1000, the same collapse turns 1000 samples into 500 rows, which is exactly the 500 vs. 1000 mismatch in your error.) It's easier to flatten after your CNN layers in this particular instance: that flattens all values from each channel while preserving the first (batch) dimension, which is what start_dim=1 is for.
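
To see the difference, a minimal sketch with a dummy activation of the shape your conv stack produces:

import torch

x = torch.randn(64, 128, 6, 6)              # shape after your conv2 + max pooling
print(x.view(-1, 9216).shape)               # torch.Size([32, 9216]): two samples per row
print(torch.flatten(x, start_dim=1).shape)  # torch.Size([64, 4608]): batch dim preserved

So if you kept your original conv layers, fc1 would need in_features=4608 rather than 9216.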

Here's an example meant to show the idea; the layer sizes are arbitrary, but the code runs. You should tweak the kernel sizes, number of channels, etc. to your liking!

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=4)
        self.conv2 = nn.Conv2d(32, 32, kernel_size=8)
        self.fc1 = nn.Linear(128, 100)  # 32 channels * 2 * 2 = 128 features after flattening
        self.fc2 = nn.Linear(100, 10)   # 10 classes

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))  # [N, 1, 28, 28] -> [N, 32, 25, 25] -> [N, 32, 12, 12]
        x = F.relu(F.max_pool2d(self.conv2(x), 2))  # -> [N, 32, 5, 5] -> [N, 32, 2, 2]
        x = torch.flatten(x, start_dim=1)           # -> [N, 128], batch dim preserved
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x                                    # raw logits: no log_softmax before F.cross_entropy
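
A quick shape check with the class above, just to confirm the output is [batch_size, 10]:

net = Net()
dummy = torch.randn(64, 1, 28, 28)  # a fake MNIST batch
print(net(dummy).shape)             # torch.Size([64, 10])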
Ivan