
I have a UIImage extension, pulled from somewhere, that can change the color of its image. The problem is that it downgrades the image's resolution after coloring it. I've seen other answers about this, but I'm not sure how to adapt them to render a retina image in this case:

extension UIImage {

    func maskWithColor(color: UIColor) -> UIImage? {
        let maskImage = cgImage!

        let width = size.width
        let height = size.height
        let bounds = CGRect(x: 0, y: 0, width: width, height: height)

        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)

        let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)!

        context.clip(to: bounds, mask: maskImage)
        context.setFillColor(color.cgColor)
        context.fill(bounds)

        if let cgImage = context.makeImage() {
            let coloredImage = UIImage(cgImage: cgImage)
            return coloredImage
        } else {
            return nil
        }
    }

}

I've seen people use UIGraphicsBeginImageContextWithOptions and set its scale to the main screen's, but I don't think that works when I'm using the CGContext initializer.

Chewie The Chorkie

1 Answer


I think you want:

    let width = size.width * scale
    let height = size.height * scale

and:

        let coloredImage = UIImage(cgImage: cgImage, scale: scale, orientation: .up)

(You may need to use imageOrientation instead of .up.)
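Putting both of those changes into the original extension, the whole method would look roughly like this (an untested sketch that keeps the rest of the code as-is and passes imageOrientation through rather than hard-coding .up):

    import UIKit

    extension UIImage {

        func maskWithColor(color: UIColor) -> UIImage? {
            guard let maskImage = cgImage else { return nil }

            // UIImage.size is in points; CGContext works in pixels.
            // Multiplying by the image's scale gives the pixel dimensions.
            let width = size.width * scale
            let height = size.height * scale
            let bounds = CGRect(x: 0, y: 0, width: width, height: height)

            let colorSpace = CGColorSpaceCreateDeviceRGB()
            let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)

            guard let context = CGContext(data: nil,
                                          width: Int(width),
                                          height: Int(height),
                                          bitsPerComponent: 8,
                                          bytesPerRow: 0,
                                          space: colorSpace,
                                          bitmapInfo: bitmapInfo.rawValue) else { return nil }

            // Clip to the original image's alpha and flood-fill with the new color.
            context.clip(to: bounds, mask: maskImage)
            context.setFillColor(color.cgColor)
            context.fill(bounds)

            // Hand the scale back so the new UIImage reports its size in points rather than pixels.
            guard let newCGImage = context.makeImage() else { return nil }
            return UIImage(cgImage: newCGImage, scale: scale, orientation: imageOrientation)
        }
    }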

Ken Thomases
  • Is scale supposed to be UIScreen.main.scale? Because that makes it larger. – Chewie The Chorkie Jul 09 '18 at 19:27
  • In my original code, it drew it the right size, just not retina, so it was visibly pixelated. – Chewie The Chorkie Jul 09 '18 at 19:29
  • `scale` is supposed to be the `scale` property of the `UIImage` object on which this extension is acting. The issue is that `CGContext` works with pixels, not "points". A `UIImage`'s `size` is in points, not pixels. Multiplying by `scale` gets you pixels, which are suitable for creating the `CGContext` with the same number of pixels as the original. Then, passing the `scale` when creating the new `UIImage` lets it know that its size in points should be different than the `CGImage`'s size in pixels. It's possible that it should be `1.0 / scale`. – Ken Thomases Jul 09 '18 at 21:12
  • That did it! It was important to set the scale factor of 2 for the height and width, and the scale of the cgImage by the same factor. It doesn't quite translate in my head, because a larger scale factor makes it smaller, but higher definition. – Chewie The Chorkie Jul 10 '18 at 15:27
  • The scale factor is how many pixels per point. In the first case (multiplying width and height by scale), you're converting from points to pixels. When creating the new `UIImage`, you're telling the system how to convert the size of the `CGImage`, which is in pixels, to the size of the `UIImage` in points. – Ken Thomases Jul 10 '18 at 16:44
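To make the pixels-per-point conversion from those comments concrete, here are hypothetical numbers for a 2x asset (the asset name "icon" is made up):

    // Hypothetical 2x asset that renders at 100×100 points on screen.
    let image = UIImage(named: "icon")!
    // image.size == 100×100 (points), image.scale == 2
    let pixelWidth = image.size.width * image.scale   // 200 (pixels)
    // The CGContext is created 200 pixels wide; passing scale = 2 back to
    // UIImage(cgImage:scale:orientation:) makes the new image report 100 points again.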