I can't believe my eyes: these are basically the same code, I just converted the Objective-C code to Swift. But the Objective-C code always produces the right answer, while the Swift code sometimes produces the right answer and sometimes doesn't.

The Swift rendition:

class ImageProcessor1 {
    class func processImage(image: UIImage) {
        guard let cgImage = image.cgImage else {
            return
        }
        let width = Int(image.size.width)
        let height = Int(image.size.height)
        let bytesPerRow = width * 4
        let imageData = UnsafeMutablePointer<UInt32>.allocate(capacity: width * height)
        let colorSpace = CGColorSpaceCreateDeviceRGB()

        let bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Big.rawValue | CGImageAlphaInfo.premultipliedLast.rawValue
        guard let imageContext = CGContext(data: imageData, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else {
            return
        }
        imageContext.draw(cgImage, in: CGRect(origin: .zero, size: image.size))
        print("---------data from Swift version----------")
        for i in 0..<width * height {
            print(imageData[i])
        }
    }
}

The Objective-C rendition:

- (UIImage *)processUsingPixels:(UIImage*)inputImage {

  // 1. Get the raw pixels of the image
  UInt32 * inputPixels;

  CGImageRef inputCGImage = [inputImage CGImage];
  NSUInteger inputWidth = CGImageGetWidth(inputCGImage);
  NSUInteger inputHeight = CGImageGetHeight(inputCGImage);

  CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

  NSUInteger bytesPerPixel = 4;
  NSUInteger bitsPerComponent = 8;

  NSUInteger inputBytesPerRow = bytesPerPixel * inputWidth;

  inputPixels = (UInt32 *)calloc(inputHeight * inputWidth, sizeof(UInt32));

  CGContextRef context = CGBitmapContextCreate(inputPixels, inputWidth, inputHeight,
                                               bitsPerComponent, inputBytesPerRow, colorSpace,
                                               kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

  CGContextDrawImage(context, CGRectMake(0, 0, inputWidth, inputHeight), inputCGImage);

    NSLog(@"---------data from Object-c version----------");
    UInt32 * currentPixel = inputPixels;
    for (NSUInteger j = 0; j < inputHeight; j++) {
        for (NSUInteger i = 0; i < inputWidth; i++) {
            UInt32 color = *currentPixel;
            NSLog(@"%u", color);
            currentPixel++;
        }
    }
  return inputImage;
}

Available at https://github.com/tuchangwei/Pixel

And if you get the same output from both, please run it a few more times.

    By the way, these code snippets don’t handle images whose [`scale`](https://developer.apple.com/documentation/uikit/uiimage/1624110-scale) is anything but `1`. You might consider handling scaling once you get the immediate question behind you. – Rob Jun 28 '19 at 06:26
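As a side note to the scale comment above, here is a minimal sketch (my illustration, not code from the post, with a hypothetical helper name) of computing pixel dimensions that respect a scale other than 1:

import UIKit

// Hypothetical helper: pixel dimensions of a UIImage whose scale may not be 1.
// image.size is measured in points; multiplying by scale yields pixels.
// (For an .up-oriented image this matches image.cgImage?.width and .height.)
func pixelSize(of image: UIImage) -> (width: Int, height: Int) {
    return (Int(image.size.width * image.scale),
            Int(image.size.height * image.scale))
}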

1 Answer

Both your Objective-C and Swift code have leaks. Also, your Swift code never initializes the memory it allocates. When I initialized that memory, I didn't see any differences:

imageData.initialize(repeating: 0, count: width * height)

FWIW, while `allocate` doesn't initialize the memory buffer, `calloc` does:

... The allocated memory is filled with bytes of value zero.
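For example, a minimal sketch (my illustration, not code from the post) of how the question's manually allocated buffer could be zero-filled like `calloc` and then released, assuming `width` and `height` are defined as in the original `processImage`:

let count = width * height
let imageData = UnsafeMutablePointer<UInt32>.allocate(capacity: count)
imageData.initialize(repeating: 0, count: count)   // zero-fill, matching calloc's behavior
defer {
    imageData.deinitialize(count: count)
    imageData.deallocate()                         // plugs the leak
}
// ... create the CGContext with imageData, draw, and read the pixels as before ...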

But personally, I'd suggest you get out of the business of allocating memory at all: pass `nil` for the `data` parameter and then use `bindMemory` to access the buffer that Core Graphics allocates for you. If you do that, as the documentation says:

Pass NULL if you want this function to allocate memory for the bitmap. This frees you from managing your own memory, which reduces memory leak issues.

Thus, perhaps:

class func processImage(image: UIImage) {
    guard let cgImage = image.cgImage else {
        return
    }
    let width = cgImage.width
    let height = cgImage.height
    let bytesPerRow = width * 4

    let colorSpace = CGColorSpaceCreateDeviceRGB()

    let bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Big.rawValue | CGImageAlphaInfo.premultipliedLast.rawValue
    guard
        let imageContext = CGContext(data: nil, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo),
        let rawPointer = imageContext.data
    else {
        return
    }

    let pixelBuffer = rawPointer.bindMemory(to: UInt32.self, capacity: width * height)

    imageContext.draw(cgImage, in: CGRect(origin: .zero, size: CGSize(width: width, height: height)))
    print("---------data from Swift version----------")
    for i in 0..<width * height {
        print(pixelBuffer[i])
    }
}
Rob
  • Hello Rob, thanks for your answer, it means a lot to me. It seems I can't release `imageData` with `imageData.deinitialize(count: width * height)` followed by `imageData.deallocate()`. – Changwei Jun 28 '19 at 08:17
  • Sorry, I hadn't finished the comment... The reason is that I use `imageData` to create `pixels = UnsafeMutableBufferPointer(start: imageData, count: width * height)`. If I release `imageData` and later fetch a pixel from `pixels`, I get a crash. If I use `bindMemory`, I get a crash too. Could you analyze this again? Thanks. – Changwei Jun 28 '19 at 08:32
  • Maybe I should release `pixels` once I no longer need it, as in this answer: https://stackoverflow.com/questions/35986292/how-to-dealloc-unsafemutablepointer-referenced-from-swift-struct. Thank you again. – Changwei Jun 28 '19 at 08:42
  • There are a number of ways, but I'd avoid the `allocate`/`deallocate` pattern. Perhaps give this method a closure parameter. Perhaps save the pixels in a `Data`. But we're going beyond the scope of this question; if you're still unclear, maybe post a separate question. If/when you do, describe the broader problem you're trying to solve, too. – Rob Jun 28 '19 at 09:04
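To illustrate the "save it in a `Data`" suggestion from the comment above, here is a sketch (my illustration under assumptions, with a hypothetical helper name, not code from the post): let Core Graphics own the bitmap buffer and copy the pixels out before returning, so nothing has to be deallocated manually.

import UIKit

// Hypothetical helper, for illustration only: draw the image into a context that
// Core Graphics allocates itself, then copy the pixel bytes into a Data.
func pixelData(from image: UIImage) -> Data? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height
    let bytesPerRow = width * 4
    let bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Big.rawValue | CGImageAlphaInfo.premultipliedLast.rawValue

    guard
        let context = CGContext(data: nil,            // nil: Core Graphics allocates the buffer
                                width: width,
                                height: height,
                                bitsPerComponent: 8,
                                bytesPerRow: bytesPerRow,
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: bitmapInfo),
        let rawPointer = context.data
    else { return nil }

    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return Data(bytes: rawPointer, count: bytesPerRow * height)   // copies the pixels out
}

The returned `Data` owns its bytes, so the caller can index into it (or bind it back to `UInt32`) without worrying about deallocation.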