4

I'm trying to draw a standard NSImage in white instead of black. The following works fine for drawing the image in black in the current NSGraphicsContext:

NSImage* image = [NSImage imageNamed:NSImageNameEnterFullScreenTemplate];
[image drawInRect:r fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];

I expected NSCompositeXOR to do the trick, but no. Do I need to go down the complicated [CIFilter filterWithName:@"CIColorInvert"] path? I feel like I must be missing something simple.

smartgo

3 Answers

6

The Core Image route would be the most reliable. It's actually not very complicated; I've posted a sample below. If you know that none of your images will be flipped, you can remove the transform code. The main thing to be careful of is that the conversion from NSImage to CIImage can be expensive, so make sure you cache the CIImage if possible and don't re-create it during each drawing operation (a caching sketch follows the sample).

// Convert the NSImage to a CIImage via its TIFF data.
CIImage* ciImage = [[CIImage alloc] initWithData:[yourImage TIFFRepresentation]];

// NSImage and CIImage use opposite vertical origins, so flip the
// CIImage to compensate if the source image is flipped.
if ([yourImage isFlipped])
{
    CGRect cgRect    = [ciImage extent];
    CGAffineTransform transform;
    transform = CGAffineTransformMakeTranslation(0.0, cgRect.size.height);
    transform = CGAffineTransformScale(transform, 1.0, -1.0);
    ciImage   = [ciImage imageByApplyingTransform:transform];
}

// Run the image through the CIColorInvert filter and draw the result
// into the current graphics context.
CIFilter* filter = [CIFilter filterWithName:@"CIColorInvert"];
[filter setDefaults];
[filter setValue:ciImage forKey:@"inputImage"];
CIImage* output = [filter valueForKey:@"outputImage"];
[output drawAtPoint:NSZeroPoint fromRect:NSRectFromCGRect([output extent]) operation:NSCompositeSourceOver fraction:1.0];

Note: release/retain memory management is left as an exercise; the code above assumes garbage collection.
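
As a rough sketch of the caching suggestion above (assuming a `_cachedOutput` ivar and a `sourceImage` accessor, neither of which appears in the original code), the conversion and filtering could be done once and reused on every draw:

// Build the inverted CIImage lazily and cache it, since the
// NSImage-to-CIImage conversion is the expensive step.
- (CIImage*)invertedImage
{
    if (_cachedOutput == nil)
    {
        CIImage* ciImage = [[CIImage alloc] initWithData:[[self sourceImage] TIFFRepresentation]];
        CIFilter* filter = [CIFilter filterWithName:@"CIColorInvert"];
        [filter setDefaults];
        [filter setValue:ciImage forKey:@"inputImage"];
        _cachedOutput = [filter valueForKey:@"outputImage"];
    }
    return _cachedOutput;
}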

If you want to render the image at an arbitrary size, you could do the following:

NSSize imageSize = NSMakeSize(1024, 768); // or whatever size you want
[yourImage setSize:imageSize];

// Render the resized image into a bitmap so the CIImage is created at
// the full target resolution rather than the original TIFF size.
[yourImage lockFocus];
NSBitmapImageRep* bitmap = [[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0, 0, imageSize.width, imageSize.height)];
[yourImage unlockFocus];
CIImage* image = [CIImage imageWithData:[bitmap TIFFRepresentation]];
Rob Keniger
  • Thanks, that does solve the color inversion issue. Unfortunately, it doesn't give me the full resolution: it appears that the initial TIFFRepresentation is much smaller than the size I want to draw it in. How can I make sure I get the data at a high resolution? – smartgo Jan 27 '10 at 23:53
  • You could create an `NSBitmapImageRep` from the `NSImage` using the size you want and then create the `CIImage` from the bitmap, see here: http://pastie.org/798088 – Rob Keniger Jan 28 '10 at 01:31
  • No worries. I added the code from pastie.org to my answer so that people don't have to go hunting. – Rob Keniger Jan 28 '10 at 06:47
  • -[NSImage lockFocus] modifies the original image. It's like drawing the image in a context set up like a window the size of the image, leaving focus locked there for more drawing, then snapshotting the result back into the image in unlockFocus. You'll lose data here if the original image is anything other than a bitmap with one slice. It's also quite inefficient - lockFocus creates a new buffer, as described. Then TIFFRepresentation creates an uncompressed copy of your image - basically another buffer. Then you initialize CIImage with that, another copy to decode the encoded TIFF data. – Ken Apr 20 '10 at 23:43
  • Use -CGImageForProposedRect:context:hints:. No copies. – Ken Apr 20 '10 at 23:43
  • Thanks for the info. Unfortunately, that method is only available in 10.6. Is there a better way in Leopard? – Rob Keniger Apr 21 '10 at 00:27
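
For reference, the copy-free 10.6+ approach Ken describes might look roughly like this (a sketch based on his comment, not part of the original answer; `yourImage` is assumed as above):

NSRect proposedRect = NSMakeRect(0, 0, [yourImage size].width, [yourImage size].height);
// No lockFocus or TIFF round trip; the caller does not own the returned CGImage.
CGImageRef cgImage = [yourImage CGImageForProposedRect:&proposedRect
                                               context:[NSGraphicsContext currentContext]
                                                 hints:nil];
CIImage* ciImage = [[CIImage alloc] initWithCGImage:cgImage];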
3

Here is a solution using Swift 5.1, somewhat based on the above solutions. Note that I am not caching the images, so it likely isn't the most efficient; my primary use case is to invert small monochrome images in toolbar buttons based on whether the current color scheme is light or dark.

import os
import AppKit
import Foundation

public extension NSImage {

    /// Returns a color-inverted copy of the image, or `self` unchanged if any step fails.
    func inverted() -> NSImage {
        guard let cgImage = self.cgImage(forProposedRect: nil, context: nil, hints: nil) else {
            os_log(.error, "Could not create CGImage from NSImage")
            return self
        }

        let ciImage = CIImage(cgImage: cgImage)
        guard let filter = CIFilter(name: "CIColorInvert") else {
            os_log(.error, "Could not create CIColorInvert filter")
            return self
        }

        filter.setValue(ciImage, forKey: kCIInputImageKey)
        guard let outputImage = filter.outputImage else {
            os_log(.error, "Could not obtain output CIImage from filter")
            return self
        }

        guard let outputCgImage = outputImage.toCGImage() else {
            os_log(.error, "Could not create CGImage from CIImage")
            return self
        }

        return NSImage(cgImage: outputCgImage, size: self.size)
    }
}

fileprivate extension CIImage {
    /// Renders the CIImage into a CGImage using a temporary CIContext.
    func toCGImage() -> CGImage? {
        let context = CIContext(options: nil)
        if let cgImage = context.createCGImage(self, from: self.extent) {
            return cgImage
        }
        return nil
    }
}
Steven W. Klassen
0

Just one note: I've found that the CIColorInvert filter isn't always reliable. For example, if you use it to invert back an image that was inverted in Photoshop, the filter produces a much lighter image. As far as I understand, this happens because of the difference in gamma: CIFilter works with a gamma of 1, while images from other sources usually don't.

While I was looking for ways to change the gamma value for CIFilter, I found a note that there's a bug in CIContext: changing its gamma value from the default 1 will produce unpredictable results.

Regardless, there's another way to invert an NSImage that always produces correct results: inverting the pixels of its NSBitmapImageRep directly.

I'm reposting the code from etutorials.org (http://bit.ly/Y6GpLn):

// srcImageRep is the NSBitmapImageRep of the source image
NSInteger n = [srcImageRep bitsPerPixel] / 8;     // Bytes per pixel
NSInteger w = [srcImageRep pixelsWide];
NSInteger h = [srcImageRep pixelsHigh];
NSInteger rowBytes = [srcImageRep bytesPerRow];
BOOL hasAlpha = [srcImageRep hasAlpha];
NSInteger x, y, s;

NSImage *destImage = [[NSImage alloc] initWithSize:NSMakeSize(w, h)];
NSBitmapImageRep *destImageRep = [[[NSBitmapImageRep alloc]
      initWithBitmapDataPlanes:NULL
          pixelsWide:w
          pixelsHigh:h
          bitsPerSample:8
          samplesPerPixel:n
          hasAlpha:hasAlpha
          isPlanar:NO
          colorSpaceName:[srcImageRep colorSpaceName]
          bytesPerRow:rowBytes
          bitsPerPixel:0] autorelease];

unsigned char *srcData = [srcImageRep bitmapData];
unsigned char *destData = [destImageRep bitmapData];

// Invert each color component of every pixel. The alpha component
// (assumed to be the last sample, the NSBitmapImageRep default) is
// copied unchanged so transparency is preserved.
for ( y = 0; y < h; y++ )
{
    for ( x = 0; x < w; x++ )
    {
        unsigned char *srcPixel  = srcData  + y * rowBytes + x * n;
        unsigned char *destPixel = destData + y * rowBytes + x * n;
        for ( s = 0; s < n; s++ )
            destPixel[s] = (hasAlpha && s == n - 1) ? srcPixel[s] : 255 - srcPixel[s];
    }
}

[destImage addRepresentation:destImageRep];
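
To obtain `srcImageRep` from an `NSImage` in the first place, one simple option is the following sketch (`yourImage` is assumed, and the TIFF round trip does copy the data, as noted in the comments on the accepted answer):

// Create a bitmap rep from the image's TIFF data.
NSBitmapImageRep *srcImageRep = [NSBitmapImageRep imageRepWithData:[yourImage TIFFRepresentation]];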
Zevrix