
This is the code I was working on for image classification using PyTorch, and I can't get the accuracy right: it exceeds 100. Can anyone help me find the error?

    def trained_model(criterion, optimizer, epochs=5):

        epoch_loss = 0.0
        epoch_accuracy = 0
        running_loss = 0
        running_accuracy = 0
        total = 0

        for epoch in range(epochs):
            print('epoch : {}/{}'.format(epoch + 1, epochs))

            for images, labels in train_loader:
                images, labels = images.to(device), labels.to(device)

                optimizer.zero_grad()

                outputs = model(images)
                loss = criterion(outputs, labels)

                _, predictions = torch.max(outputs, dim=1)

                loss.backward()
                optimizer.step()

                running_loss += loss.item()
                running_accuracy += torch.sum(predictions == labels.data)

            epoch_loss = running_loss / len(train_dataset)
            epoch_accuracy = running_accuracy / len(train_dataset)

            print('Loss:{:.4f} , Accuracy : {:.4f} '.format(epoch_loss, epoch_accuracy))

        return model

1 Answer


You should probably use torch.argmax to get the class predictions from your model output, instead of torch.max (or take only the indices, the second element of the tuple torch.max returns).

Assuming your labels are class indices, something like the following will get you the average accuracy of the current batch:

>>> outputs = torch.rand(16, 5)                # logits for a batch of 16, 5 classes

>>> pred = torch.argmax(outputs, dim=1)        # shape (16,): one class index per sample

>>> labels = torch.randint(0, 5, (16,))

>>> accuracy = (pred == labels).float().mean()
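Note also that in your loop, running_loss and running_accuracy are initialized once, outside the epoch loop, and never reset, so they keep accumulating across epochs while you divide by len(train_dataset) only once per epoch; that is why the reported accuracy climbs past 100%. A minimal sketch of per-epoch aggregation, using random tensors in place of a real model and DataLoader (the batch shapes here are illustrative assumptions):

```python
import torch

# Dummy "batches": (outputs, labels) pairs standing in for a real DataLoader.
torch.manual_seed(0)
batches = [(torch.rand(16, 5), torch.randint(0, 5, (16,))) for _ in range(4)]
dataset_size = sum(labels.numel() for _, labels in batches)

for epoch in range(3):
    running_correct = 0  # reset at the start of EVERY epoch
    running_loss = 0.0   # same for the loss accumulator
    for outputs, labels in batches:
        preds = torch.argmax(outputs, dim=1)          # class index per sample
        running_correct += (preds == labels).sum().item()
    epoch_accuracy = running_correct / dataset_size   # bounded by 1.0 now
    print(f'epoch {epoch + 1}: accuracy {epoch_accuracy:.4f}')
```

With the counters reset each epoch, the ratio is correct-this-epoch over dataset size, so it can never exceed 1 (i.e. 100%).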
Ivan