
Here is the documentation on CGImageCreateWithImageInRect.

It takes a CGRect as a parameter. From what I understand, a CGRect is in points.

However the documentation says:

"References the pixels within the resulting rectangle, treating the first pixel within the rectangle as the origin of the subimage."

This seemed inaccurate, and it was proven so when I needed to resize my UIImages: I had to multiply the image dimensions by the screen scale, otherwise my image came out the wrong size.

var imageRef: CGImageRef = CGImageCreateWithImageInRect(
    image.CGImage,
    CGRectMake(0, 0,
               image.size.width * UIScreen.mainScreen().scale,
               image.size.height * UIScreen.mainScreen().scale))

If I didn't multiply by scale, the image came out too small.

Am I right that this is bad documentation (as in, it shouldn't take a CGRect, which is in points, and then read it as pixels), or am I not understanding something fundamental here?

Rob Napier
Aggressor
  • Whenever anything says Pixels, you should interpret it as Points (pixels * scale) – David Berry Feb 12 '15 at 04:25
  • Put another way "pixel" == "virtual pixel" – David Berry Feb 12 '15 at 04:25
  • Yes, a point is pixels * scale. The problem is it's taking points as the argument and then reading them AS pixels! – Aggressor Feb 12 '15 at 04:26
  • CGRect, CGPoint, CGSize, etc., are all points. All measurements in Core Graphics and up are always in points. The only time it's in pixels is when you're building a CGBitmapContext. – David Berry Feb 12 '15 at 05:33
  • As David said, Core Graphics' basic structures and functionality don't know about logical points; they work only with physical pixels. – Andrea Feb 12 '15 at 07:20
  • Right, so shouldn't you pass in raw pixel values instead of a CGRect? That's the 'point' (no pun intended) of my question: I think their method signature is bad. – Aggressor Feb 12 '15 at 19:27
  • I'm not sure any of this "pixel" means "point" is correct. – Ian Apr 16 '16 at 18:47

2 Answers


I think the documentation is correct (pixels are pixels) based on this test:

var filename = "InterfaceButtons_Scores@2x.png"
var path = NSBundle.mainBundle().pathForResource(filename,ofType:nil)
var dataProvider = CGDataProviderCreateWithFilename(path!)
var cgimg = CGImageCreateWithPNGDataProvider(dataProvider, nil, false, .RenderingIntentDefault)!

print("size of 100x100px file loaded into CGImage")
print(CGImageGetHeight(cgimg))
print(CGImageGetWidth(cgimg))

var croppedImg = CGImageCreateWithImageInRect(cgimg, CGRect(x: 0, y: 0, width: 100, height: 100))

print("size of CGImage created with 100x100 Rect from 100x100 image")
print(CGImageGetHeight(croppedImg))
print(CGImageGetWidth(croppedImg))


filename = "InterfaceButtons_Scores@1x.png"
path = NSBundle.mainBundle().pathForResource(filename,ofType:nil)
dataProvider = CGDataProviderCreateWithFilename(path!)
cgimg = CGImageCreateWithPNGDataProvider(dataProvider, nil, false, .RenderingIntentDefault)!

print("size of 51x51px file loaded into CGImage")
print(CGImageGetHeight(cgimg))
print(CGImageGetWidth(cgimg))

croppedImg = CGImageCreateWithImageInRect(cgimg, CGRect(x: 0, y: 0, width: 100, height: 100))

print("size of CGImage created with 100x100 Rect from 51x51 image")
print(CGImageGetHeight(croppedImg))
print(CGImageGetWidth(croppedImg))

output:

size of 100x100px file loaded into CGImage
100
100
size of CGImage created with 100x100 Rect from 100x100 image
100
100
size of 51x51px file loaded into CGImage
51
51
size of CGImage created with 100x100 Rect from 51x51 image
51
51

I'm not sure what to say about your UIImage resizing without knowing exactly how you're trying to resize them and the exact results you saw. I do know that if you created the UIImage from an asset in an asset catalog (using UIImage(named:)), the actual pixel dimensions of the CGImage property will depend on the scale factor of the device or simulator. If there are multiple sizes for the same asset, the UIImage will load whichever asset matches the system's scale factor. In other words, in this scenario, you can't count on the UIImage's CGImage having consistent dimensions, and scaling code may go awry.
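To illustrate that last point, here is a minimal sketch of the arithmetic UIKit uses, where `pointSize(forPixelSize:scale:)` is a hypothetical helper (not a UIKit API): a UIImage's point size is its backing CGImage's pixel size divided by its scale factor, so the same asset at @1x and @2x has different pixel dimensions but can have the same point size.

```swift
import Foundation  // CGSize / CGFloat

// Hypothetical helper showing the relationship UIKit maintains:
// pointSize = pixelSize / scale.
func pointSize(forPixelSize pixels: CGSize, scale: CGFloat) -> CGSize {
    return CGSize(width: pixels.width / scale,
                  height: pixels.height / scale)
}

// A 100x100-pixel @2x asset reports a 50x50-point size...
let retina = pointSize(forPixelSize: CGSize(width: 100, height: 100), scale: 2)
print(retina.width, retina.height)   // 50.0 50.0

// ...while 100x100 pixels at @1x reports 100x100 points.
let nonRetina = pointSize(forPixelSize: CGSize(width: 100, height: 100), scale: 1)
print(nonRetina.width, nonRetina.height)   // 100.0 100.0
```

This is why resizing code that assumes a fixed pixel size for a named asset can go wrong when the device's scale factor changes.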

Ian

"From what I understand a CGRect is in points"

CGRect is "a structure that contains the location and dimensions of a rectangle." It's based on CGSize. That's defined as "a structure that contains width and height values." Within that, width is defined only as "a width value."

Nowhere in the documentation of CGSize (or CGRect) does it tell you how these values are to be interpreted. It never says "these are points" or "these are pixels" or "these are millimeters." That's up to the caller. The docs for CGSize warn you that it may be used as a vector and the "size" may be negative. The docs for CGRect warn you that the origin may be in different places. (CGRect is used for both UIKit and Core Graphics, which have different coordinate systems.)

I don't think there's anything wrong with the documentation here. CGRect doesn't promise anything about scaling.

That said, you should not over-read the use of words like "point" and "pixel" as always meaning "related to screen scale," particularly in older documentation. The word "point" often just means "location in an image" in the docs (and that's what it means here). This particular function goes back to OS X 10.4. In the absence of a CGContext, "screen scale" doesn't mean much anyway. This function works on raw image bitmap data.
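Since the function works in the CGImage's own pixel space, a caller holding a rect in UIKit points has to convert before cropping. A minimal sketch, where `pixelRect(fromPointRect:scale:)` is a hypothetical helper (not part of Core Graphics or UIKit):

```swift
import Foundation  // CGRect / CGFloat

// Hypothetical helper: scale a rect expressed in UIKit points into
// the pixel-based coordinates that the crop function operates in.
func pixelRect(fromPointRect rect: CGRect, scale: CGFloat) -> CGRect {
    return CGRect(x: rect.origin.x * scale,
                  y: rect.origin.y * scale,
                  width: rect.size.width * scale,
                  height: rect.size.height * scale)
}

// A 50x50-point crop on a @2x image covers 100x100 pixels.
let crop = pixelRect(fromPointRect: CGRect(x: 0, y: 0, width: 50, height: 50),
                     scale: 2)
print(crop.width, crop.height)   // 100.0 100.0
```

This is exactly the multiply-by-scale step the question arrived at empirically; the conversion belongs in the caller because CGRect itself carries no unit.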

Rob Napier