from torchvision import transforms

transform = transforms.Compose([transforms.ToPILImage(), transforms.ToTensor()])

[Image: pixel values before applying the transformation]

[Image: pixel values after applying the transformation]

Q.1 Why are the pixel values changed?
Q.2 How can this be corrected?

atin

2 Answers


Q1: torchvision's transforms.ToTensor() normalizes your input image, i.e. it puts the values in the range [0, 1], since this is a very common preprocessing step.
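
A minimal illustration of that scaling, using a made-up 1×3 uint8 array in place of the real image:

```python
import numpy as np
from torchvision import transforms

# Hypothetical 8-bit image standing in for the real input.
img = np.array([[0, 128, 255]], dtype=np.uint8)

t = transforms.ToTensor()(img)
print(t)  # tensor([[[0.0000, 0.5020, 1.0000]]]) -- values rescaled into [0, 1]
```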

Q2: use torch.tensor(input_image) to convert the image into a tensor instead.
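
As a sketch (same made-up array as above), torch.tensor keeps the raw values instead of rescaling them:

```python
import numpy as np
import torch

# Hypothetical 8-bit image standing in for the real input.
img = np.array([[0, 128, 255]], dtype=np.uint8)

t = torch.tensor(img)  # or torch.from_numpy(img) to share memory with the array
print(t)  # tensor([[  0, 128, 255]], dtype=torch.uint8) -- values unchanged
```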

Noé Achache
  • It doesn't work, and even if transforms.ToTensor() normalizes the input image, the relative pixel values should not change; yet the bright pixels become completely dark when performing the transform. – atin Apr 26 '20 at 11:41

I was able to solve this problem by normalizing the input data before transforming it.
The problem was that ToPILImage() was discarding all the values greater than 1, which is why the bright pixels became dark.
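
Since the original link is dead and the exact code is gone, here is a minimal sketch of that fix under the assumption that simple min-max scaling was used: bring the values into [0, 1] before ToPILImage() so nothing above 1 gets mangled.

```python
import torch
from torchvision import transforms

# Hypothetical float image with values above 1 (stand-in for the real data).
img = torch.tensor([[[0.2, 1.7, 3.4]]])  # shape (C=1, H=1, W=3)

# Min-max scale into [0, 1] before the PIL round trip.
img = (img - img.min()) / (img.max() - img.min())

transform = transforms.Compose([transforms.ToPILImage(), transforms.ToTensor()])
out = transform(img)
print(out)  # relative brightness is preserved after the round trip
```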

atin
  • 985
  • 3
  • 11
  • 28
  • Could you post the code here, please? That link is dead. – CrazyChucky Mar 05 '22 at 10:27
  • Sorry for the VERY late reply, I wasn't active on SO for some time. Also, unfortunately I don't have the code now, although IIRC I just normalized the input image using [transforms.Normalize](https://pytorch.org/vision/stable/generated/torchvision.transforms.Normalize.html) @CrazyChucky – atin Nov 19 '22 at 22:04