I'm writing tests to detect changes to lossless image formats (starting with PNG) and finding that on Linux and Windows the image loading mechanisms work as expected - but on iOS (I haven't tried macOS) the image data is always changed very slightly whenever I load from or save to a PNG file on disk using Apple's methods.

If I create a PNG using any number of tools (GIMP/Paint.NET/whatever) and use my cross-platform PNG reading code to examine each pixel of the loaded data, it matches exactly what I created in the tool (or what I generated programmatically with my cross-platform PNG writing code). Subsequent reloading into the creation tools yields exactly the same RGBA8888 components.

If I load the PNG from disk using Apple's:

    NSString* pPathToFile = nsStringFromStdString( sPathToFile );
    UIImage* pImageFromDiskPNG = [UIImage imageWithContentsOfFile:pPathToFile];

...and then examine the resulting pixels, the data is similar but not the same. I would expect the data to be identical, as it is on the other platforms.

Now, interestingly, if I load the data from the PNG using my own code and create a UIImage from it (using the code I show below), I can display that UIImage, copy it, whatever, and when I examine its pixel data it is exactly what I gave it to begin with. That is why I think it's the loading/saving step where Apple is modifying the image data.

When I save what I know to be a good UIImage with perfect pixel data, and then load that Apple-saved image with my PNG loading code, I can see it's not exactly the same data. I have used several of the methods Apple suggests for saving a UIImage to PNG (primarily UIImagePNGRepresentation).
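
The save path is essentially this (the variable names here are placeholders):

    NSData* pPNGData = UIImagePNGRepresentation(pGoodImage);
    [pPNGData writeToFile:pPathToSave atomically:YES];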

The only explanation I can really think of is that Apple's loading and saving on iOS doesn't truly support straight RGBA8888 and is doing some sort of premultiply with the alpha channel. I speculate about this because, when I first started using the code posted below, I was choosing

    kCGImageAlphaLast

...instead of what I ultimately had to use

    kCGImageAlphaPremultipliedLast

because the former is not supported on iOS for some reason.
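
If premultiplication is indeed happening, the drift would follow directly from 8-bit rounding: premultiplying quantises each colour channel, and un-premultiplying cannot always recover the original value. A quick sketch of the arithmetic (my own illustration, not Apple's actual code):

    // Illustration only: an 8-bit premultiply/un-premultiply round trip loses data.
    uint8_t r = 200, a = 100;

    // Premultiply: round(200 * 100 / 255) = 78
    uint8_t rPremultiplied = (uint8_t)((r * a + 127) / 255);

    // Un-premultiply: round(78 * 255 / 100) = 199 -- not the original 200
    uint8_t rRestored = (uint8_t)((rPremultiplied * 255 + a / 2) / a);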

Does anyone have any experience around this issue on iOS?

Cheers!

The code I use to push/pull RGBA8888 data into and out of UIImages is below:

    - (unsigned char *) convertUIImageToBitmapRGBA8:(UIImage*)image dataSize:(NSUInteger*)dataSize
    {
        CGImageRef imageRef = image.CGImage;

        // Create a bitmap context to draw the UIImage into
        CGContextRef context = [self newBitmapRGBA8ContextFromImage:imageRef];

        if(!context) {
            return NULL;
        }

        size_t width = CGImageGetWidth(imageRef);
        size_t height = CGImageGetHeight(imageRef);

        CGRect rect = CGRectMake(0, 0, width, height);

        // Draw image into the context to get the raw image data
        CGContextDrawImage(context, rect, imageRef);

        // Get a pointer to the data
        unsigned char *bitmapData = (unsigned char *)CGBitmapContextGetData(context);

        // Copy the data out of the context's buffer
        // (the returned buffer is allocated with malloc; the caller must free it)
        size_t bytesPerRow = CGBitmapContextGetBytesPerRow(context);
        size_t bufferLength = bytesPerRow * height;

        unsigned char *newBitmap = NULL;

        if(bitmapData) {
            *dataSize = bufferLength;
            newBitmap = (unsigned char *)malloc(bufferLength);

            if(newBitmap) {    // Copy the data
                memcpy(newBitmap, bitmapData, bufferLength);
            }
        } else {
            NSLog(@"Error getting bitmap pixel data\n");
        }

        // Release the context before freeing the buffer it draws into
        CGContextRelease(context);
        free(bitmapData);

        return newBitmap;
    }


    - (CGContextRef) newBitmapRGBA8ContextFromImage:(CGImageRef) image
    {
        CGContextRef context = NULL;
        CGColorSpaceRef colorSpace;
        uint32_t *bitmapData;

        size_t bitsPerPixel = 32;
        size_t bitsPerComponent = 8;
        size_t bytesPerPixel = bitsPerPixel / bitsPerComponent;

        size_t width = CGImageGetWidth(image);
        size_t height = CGImageGetHeight(image);

        size_t bytesPerRow = width * bytesPerPixel;
        size_t bufferLength = bytesPerRow * height;

        colorSpace = CGColorSpaceCreateDeviceRGB();

        if(!colorSpace) {
            NSLog(@"Error allocating color space RGB\n");
            return NULL;
        }

        // Allocate memory for image data
        bitmapData = (uint32_t *)malloc(bufferLength);

        if(!bitmapData) {
            NSLog(@"Error allocating memory for bitmap\n");
            CGColorSpaceRelease(colorSpace);
            return NULL;
        }

        // Create the bitmap context (the caller frees bitmapData via CGBitmapContextGetData)
        context = CGBitmapContextCreate( bitmapData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrder32Big );

        if( !context )
        {
            free( bitmapData );
            NSLog( @"Bitmap context not created" );
        }

        CGColorSpaceRelease( colorSpace );

        return context;
    }


    - (UIImage*) convertBitmapRGBA8ToUIImage:(unsigned char*) pBuffer withWidth:(int) nWidth withHeight:(int) nHeight
    {
        // Create a bitmap context around the caller's buffer
        const size_t nColorChannels = 4;
        const size_t nBitsPerChannel = 8;
        const size_t nBytesPerRow = nWidth * nColorChannels;

        CGColorSpaceRef oCGColorSpaceRef = CGColorSpaceCreateDeviceRGB();
        CGContextRef oCGContextRef = CGBitmapContextCreate( pBuffer, nWidth, nHeight, nBitsPerChannel, nBytesPerRow, oCGColorSpaceRef, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrder32Big );
        CGColorSpaceRelease( oCGColorSpaceRef );

        if( !oCGContextRef ) {
            NSLog( @"Bitmap context not created" );
            return nil;
        }

        // Create the image, then release the intermediate Core Graphics objects
        CGImageRef toCGImage = CGBitmapContextCreateImage( oCGContextRef );
        CGContextRelease( oCGContextRef );

        UIImage* pImage = [[UIImage alloc] initWithCGImage:toCGImage];
        CGImageRelease( toCGImage );

        return pImage;
    }
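
For completeness, the round trip through these helpers looks roughly like this in my tests (pImage is the UIImage under test; the width and height come from my own PNG reader):

    NSUInteger nDataSize = 0;
    unsigned char* pPixels = [self convertUIImageToBitmapRGBA8:pImage dataSize:&nDataSize];

    UIImage* pRoundTripped = [self convertBitmapRGBA8ToUIImage:pPixels withWidth:nWidth withHeight:nHeight];
    free(pPixels);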
WTH

2 Answers


Based on your source code, it appears that you are using RGBA (RGB + alpha channel) data imported from PNG source images. When you add images to an iOS project, Xcode pre-processes each image, pre-multiplying the RGB channels by the A channel for performance reasons. So, by the time the image is loaded on an iPhone device, the RGB values of non-opaque pixels (those where A != 255) may have changed. The RGB numbers are modified, but the image still renders to the screen exactly the same under iOS. This is the distinction between "straight alpha" and "pre-multiplied alpha".
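
One quick way to confirm this at runtime is to check the alpha info of the decoded CGImage (a minimal sketch, where image is the UIImage you loaded):

    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(image.CGImage);

    if (alphaInfo == kCGImageAlphaPremultipliedLast ||
        alphaInfo == kCGImageAlphaPremultipliedFirst) {
        NSLog(@"Decoded image is premultiplied; straight RGB values are not preserved.");
    }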

MoDJ
  • I wrote this blog post on the subject, if you are interested: http://www.modejong.com/blog/post3_pixel_binary_layout_w_premultiplied_alpha/ – MoDJ Apr 04 '19 at 19:11

Store the image data directly; don't use UIImage.pngData() to convert the image to data, because this method changes a pixel's RGB values if that pixel has an alpha channel.
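
For example, copying the file's bytes verbatim avoids the decode/re-encode round trip entirely (a sketch; the path variables are placeholders):

    // Copy the PNG file byte-for-byte instead of decoding and re-encoding it.
    NSData* pPNGBytes = [NSData dataWithContentsOfFile:pSourcePath];
    [pPNGBytes writeToFile:pDestinationPath atomically:YES];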

Changwei