
I'm referring mainly to Bilinear and Bicubic resampling. These methods sample 4 and 16 pixels respectively: Bilinear uses the 2x2 block of pixels closest to the new pixel, and Bicubic uses the 4x4 block closest to it.
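For context, my understanding of the 2x2 sampling is something like this minimal Python/NumPy sketch (the function name and the edge clamping are my own choices, not any library's implementation):

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample img (an H x W or H x W x C float array) at fractional
    coordinates (x, y) using only the 2x2 block of nearest pixels."""
    h, w = img.shape[:2]
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)  # clamp at the border
    fx, fy = x - x0, y - y0  # fractional position inside the 2x2 cell
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bottom
```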

Why do we limit the methods to this many pixels, though? For example, suppose I'm using Bilinear resampling on the following 32x32 image (the red blocks are 2x2 in size; the black lines and dots represent the nearest pixels and overlay the new image size) and I want to scale it down to 2x2. The result would be a solid red image even though the source image is almost entirely blue:

[image: 32x32 image to 2x2 image]

Would it not make more sense to sample ALL of the pixels that are "mapped" to each new pixel, so that you get a more accurate representation of the color in that area? The limits used on these methods seem to me like Nearest Neighbor with extra steps. Even when you're only scaling down a small amount (to more than half the original size), wouldn't you want to weight the source pixels by how much they overlap the new pixel, so that 3 red pixels that each overlap it by only 5% don't drown out 1 blue pixel that overlaps it by 85%? Something like the sketch below is what I have in mind.
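A rough Python/NumPy sketch of that overlap-weighted averaging (the naive loops and names are just for illustration, not how any real library implements it):

```python
import numpy as np

def area_downsample(img, out_h, out_w):
    """Downsample by averaging every source pixel, weighted by how much
    of its area falls inside each destination pixel (a box filter)."""
    in_h, in_w = img.shape[:2]
    sy, sx = in_h / out_h, in_w / out_w  # source pixels per destination pixel
    out = np.zeros((out_h, out_w) + img.shape[2:], dtype=np.float64)
    for oy in range(out_h):
        y0, y1 = oy * sy, (oy + 1) * sy  # this output pixel's footprint (rows)
        for ox in range(out_w):
            x0, x1 = ox * sx, (ox + 1) * sx  # footprint (columns)
            acc, total_w = 0.0, 0.0
            for iy in range(int(np.floor(y0)), int(np.ceil(y1))):
                wy = min(iy + 1, y1) - max(iy, y0)  # vertical overlap
                for ix in range(int(np.floor(x0)), int(np.ceil(x1))):
                    wx = min(ix + 1, x1) - max(ix, x0)  # horizontal overlap
                    acc = acc + wx * wy * img[iy, ix]
                    total_w += wx * wy
            out[oy, ox] = acc / total_w
    return out
```

For the 32x32 to 2x2 example above, this reduces to a plain average over each 16x16 block, so the result would come out mostly blue rather than solid red.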

J. Doe
  • I'm not especially familiar with downsampling edge cases, but my instinct is that you're basically right: bilinear downsampling gives a non-intuitive result for this input, and a more sophisticated downsample (e.g. weighting by overlap) would resolve the issue. However, the downsampling method you propose would generate pixel colors that aren't present in the original image, which is another sort of aberration. – afarley Feb 17 '20 at 05:35
  • @afarley Bilinear will almost always generate colours that aren't present, though. Even with a "normal" input like a picture of a lake, your 4 blues that aren't quite the same will be interpolated to a new blue that could differ from all of the original blues. My main point, I guess, was that with a sufficiently large image being downsampled into a very small one, there is data that is not being used, and the "better" sampling method approaches the effectiveness of Nearest Neighbor, the only difference being, as you pointed out, that no or very few original colours are kept. – J. Doe Feb 17 '20 at 05:40

1 Answer


Correct: if you just do naive bilinear or bicubic interpolation when downsampling, you can get aliasing artifacts. Bilinear and bicubic interpolation make more sense when upsampling, rotating, or otherwise transforming an image without decreasing its size.

When people do this in practice, it's usually a tradeoff between correctness and performance. Depending on the type of images you're scaling down, it may not matter much.

If you want more correct results, you typically want to blur the image before downsampling. There's a bit more information about it here, but the basic idea is that the blur removes high-frequency content from the image, which avoids the aliasing. Something along these lines:
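As a rough sketch of the blur-then-downsample approach (assuming a single-channel float image; the sigma heuristic below is a common rule of thumb, not a fixed rule):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def blur_then_downsample(img, factor):
    """Downscale a single-channel float image by `factor` (> 1),
    low-pass filtering first so content above the new Nyquist
    frequency doesn't alias."""
    sigma = factor / 2.0  # heuristic: wider blur for bigger reductions
    blurred = gaussian_filter(img.astype(np.float64), sigma=sigma)
    # order=1 is bilinear interpolation, now safe on the blurred image
    return zoom(blurred, 1.0 / factor, order=1)
```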

tfinniga