
I am trying to use GPUImage to resize an image mask to the size of a given image:

-(void)imageTaken:(UIImage *)image
{
    [super imageTaken:image];

    CGFloat sx = image.size.width / self.imageMask.size.width;
    CGFloat sy = image.size.height / self.imageMask.size.height;

    GPUImageTransformFilter *scaleFilter = [[GPUImageTransformFilter alloc] init];
    CATransform3D t3d = CATransform3DMakeScale(sx, sy, 1);
    scaleFilter.transform3D = t3d;

    GPUImagePicture *scaleImageSource = [[GPUImagePicture alloc] initWithImage:self.imageMask];

    [scaleImageSource addTarget:scaleFilter];
    [scaleImageSource processImage];

    UIImage *scaledMaskImage = [scaleFilter imageFromCurrentlyProcessedOutput];

    NSLog(@"sx: %f, sy: %f", sx, sy);
    NSLog(@"[image size] : %@", NSStringFromCGSize([image size]));
    NSLog(@"[_imageMask size] : %@", NSStringFromCGSize([_imageMask size]));
    NSLog(@"[scaledMaskImage size] : %@", NSStringFromCGSize([scaledMaskImage size]));

    [self.delegate photo:image
               imageMask:scaledMaskImage takenOnPhotoMaskViewController:self];
}

Output:

sx: 1.500000, sy : 1.126761
[image size] : {480, 640}
[_imageMask size] : {320, 568}
[scaledMaskImage size] : {640, 1136}

From what I understand, [scaledMaskImage size] should be {480, 640}, since sx is 1.5 and sy is 1.126761, but it is {640, 1136}, as if the mask had been scaled by 2.0. What did I do wrong?

vikingosegundo

1 Answer


If what you're looking to do is to downsample a given image, I wouldn't use the above approach. Instead, I'd recommend using a GPUImageLanczosResamplingFilter for your filter and calling -forceProcessingAtSize: or -forceProcessingAtSizeRespectingAspectRatio: on it. Forcing processing at a size is how you resize filtered images in their pixel dimensions, and the Lanczos resampling gives you much higher quality results.
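For illustration, here is a minimal sketch of that approach (assuming the same GPUImage version as in your question, where -imageFromCurrentlyProcessedOutput is the image-capture method; newer GPUImage releases use -useNextFrameForImageCapture before -processImage and -imageFromCurrentFramebuffer instead):

// Illustrative sketch: resample self.imageMask to the target image's dimensions.
GPUImagePicture *maskSource = [[GPUImagePicture alloc] initWithImage:self.imageMask];
GPUImageLanczosResamplingFilter *resampleFilter = [[GPUImageLanczosResamplingFilter alloc] init];

// Render the filter output at the size you actually want. -forceProcessingAtSize:
// works in pixel dimensions, so multiply by the image's scale factor if the
// target size you have is in points.
[resampleFilter forceProcessingAtSize:CGSizeMake(image.size.width * image.scale,
                                                 image.size.height * image.scale)];

[maskSource addTarget:resampleFilter];
[maskSource processImage];

UIImage *scaledMaskImage = [resampleFilter imageFromCurrentlyProcessedOutput];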

A transform filter will transform the image within the original pixel size, so it can shrink or position an image within an overall scene, but it's not what you would use to adjust the overall dimensions of an image. The final pixel size of the image you're getting out of the above is the initial pixel size of the UIImage you originally fed into this process.

As for why it's twice what you expect, I would hazard a guess that this has to do with the difference between points and pixels on a Retina display. You might be checking the size of a UIKit element in points and creating a UIImage from that. The UIImage will have actual pixel dimensions that are 2X the point size in each dimension.
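For reference, that point/pixel distinction is visible directly on the UIImage itself (illustrative, assuming the mask was rendered at the @2x Retina scale as guessed above):

// A UIImage's size property is in points; multiplying by its scale gives pixels.
CGSize pointSize = self.imageMask.size; // {320, 568} in your log
CGSize pixelSize = CGSizeMake(pointSize.width  * self.imageMask.scale,
                              pointSize.height * self.imageMask.scale); // {640, 1136} on a @2x device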

Again, using -forceProcessingAtSize: will set the exact pixel dimensions of your output to what you want.

Brad Larson