
There are a number of questions/answers about how to get the pixel color of an image at a given point. However, all of those answers are really slow (100-500 ms) for large images (even ones as small as 1000 x 1300, for example).

Most of the code samples out there draw to an image context. All of them take time when the actual draw takes place:

CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);

Examining this in Instruments reveals that the draw is being done by copying the data from the source image:

[Instruments screenshot showing the draw copying the source image data]

I have even tried a different means of getting at the data, hoping that getting at the bytes directly would prove much more efficient.

NSInteger pointX = trunc(point.x);
NSInteger pointY = trunc(point.y);

// Crop out a 1x1 image at the point of interest (in pixel coordinates)
CGImageRef cgImage = CGImageCreateWithImageInRect(self.CGImage, 
                           CGRectMake(pointX * self.scale, 
                                      pointY * self.scale, 
                                      1.0f, 
                                      1.0f));

CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
CFDataRef data = CGDataProviderCopyData(provider);

CGImageRelease(cgImage);

// Assumes 8-bit-per-component RGBA ordering in the provider's data
const UInt8 *buffer = CFDataGetBytePtr(data);

CGFloat red   = (CGFloat)buffer[0] / 255.0f;
CGFloat green = (CGFloat)buffer[1] / 255.0f;
CGFloat blue  = (CGFloat)buffer[2] / 255.0f;
CGFloat alpha = (CGFloat)buffer[3] / 255.0f;

CFRelease(data);

UIColor *pixelColor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];

return pixelColor;

This method spends its time in the data copy:

CFDataRef data = CGDataProviderCopyData(provider);

It would appear that it, too, is reading the data from disk rather than from the CGImage instance I am creating:

[Instruments screenshot showing the copy reading from disk]

Now, in some informal testing this method does perform better, but it is still not as fast as I want it to be. Does anyone know of an even faster way to get at the underlying pixel data?

Wayne Hartman

4 Answers


If it's possible for you to draw this image to the screen via OpenGL ES, you can get extremely fast random access to the underlying pixels in iOS 5.0 via the texture caches introduced in that version. They allow for direct memory access to the underlying BGRA pixel data stored in an OpenGL ES texture (where your image would be residing), and you could pick out any pixel from that texture almost instantaneously.

I use this to read back the raw pixel data of even large (2048x2048) images, and the read times are at worst in the range of 10-20 ms to pull down all of those pixels. Again, random access to a single pixel there takes almost no time, because you're just reading from a location in a byte array.

Of course, this means that you'll have to parse and upload your particular image to OpenGL ES, which will involve the same reading from disk and interactions with Core Graphics (if going through a UIImage) that you'd see if you tried to read pixel data from a random PNG on disk, but it sounds like you just need to render once and sample from it multiple times. If so, OpenGL ES and the texture caches on iOS 5.0 would be the absolute fastest way to read back this pixel data for something also displayed onscreen.

I encapsulate these processes in the GPUImagePicture (image upload) and GPUImageRawData (fast raw data access) classes within my open source GPUImage framework, if you want to see how something like that might work.

Brad Larson

I have yet to find a way to get access to the drawn (in frame buffer) pixels. The fastest method I've measured is:

  1. Indicate you want the image to be cached by specifying kCGImageSourceShouldCache when creating it.
  2. (optional) Precache the image by forcing it to render.
  3. Draw the image into a 1x1 bitmap context.

The cost of this method is the cached bitmap, which may have a lifetime as long as the CGImage it is associated with. The code ends up looking something like this:

  1. Create image w/ ShouldCache flag

    NSDictionary *options = @{ (id)kCGImageSourceShouldCache: @(YES) };
    CGImageSourceRef imageSource = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
    CGImageRef cgimage = CGImageSourceCreateImageAtIndex(imageSource, 0, (__bridge CFDictionaryRef)options);
    CFRelease(imageSource);
    UIImage *image = [UIImage imageWithCGImage:cgimage];
    CGImageRelease(cgimage);
    
  2. Precache image

    UIGraphicsBeginImageContext(CGSizeMake(1, 1));
    [image drawAtPoint:CGPointZero];
    UIGraphicsEndImageContext();
    
  3. Draw image to a 1x1 bitmap context

    unsigned char pixelData[] = { 0, 0, 0, 0 };
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixelData, 1, 1, 8, 4, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGImageRef cgimage = image.CGImage;
    size_t imageWidth = CGImageGetWidth(cgimage);
    size_t imageHeight = CGImageGetHeight(cgimage);
    // Offset the draw so testPoint lands on the context's single pixel
    // (the context's coordinate system is flipped relative to UIKit's)
    CGContextDrawImage(context, CGRectMake(-testPoint.x, testPoint.y - imageHeight, imageWidth, imageHeight), cgimage);
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    

pixelData now holds the (premultiplied) R, G, B, and A values of the pixel at testPoint.

darrinm

A CGImage may be nearly empty, containing no actual pixel data until you read the first pixel or draw it, so trying to speed up getting pixels out of it might not get you anywhere. There's nothing to get yet.

Are you trying to read pixels from a PNG file? You could try going after the file directly, mmap'ing it, and decoding the PNG format yourself. It will still take a while to pull the data from storage.

hotpaw2
  • 70,107
  • 14
  • 90
  • 153
  • The image has been painted on the screen, I would figure that its pixel data is already somewhere in memory. – Wayne Hartman May 02 '12 at 00:43
  • Graphics/GPU/display memory is not the same as CPU memory. You can't read the display's memory. It's opaque and outside the app's sandbox. – hotpaw2 May 02 '12 at 01:19
- (BOOL)isWallPixel:(UIImage *)image x:(int)x y:(int)y {

    CGImageRef cgImage = image.CGImage;
    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
    const UInt8 *data = CFDataGetBytePtr(pixelData);

    // Use the bitmap's own row stride; image.size.width is in points,
    // not pixels, and rows may be padded.
    size_t pixelInfo = (CGImageGetBytesPerRow(cgImage) * y) + (x * 4); // Assumes 8-bit RGBA

    //UInt8 red   = data[pixelInfo];     // If you need this info, enable it
    //UInt8 green = data[pixelInfo + 1]; // If you need this info, enable it
    //UInt8 blue  = data[pixelInfo + 2]; // If you need this info, enable it
    UInt8 alpha = data[pixelInfo + 3];   // I need only this info for my maze game
    CFRelease(pixelData);

    //UIColor *color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f]; // The pixel color info

    return alpha != 0;
}
Vitaly