
I am using the following code to create the ARGB8888 image. Is this the correct image format for ARGB8888, or should I use a different format? I followed this link to create the image format below.


guard let imageConversionToCGImage = img.cgImage,
      let imageFormat = vImage_CGImageFormat(
        bitsPerComponent: 8,
        bitsPerPixel: 32,
        colorSpace: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.first.rawValue),
        renderingIntent: .defaultIntent),
      // Source buffer
      let sourceBuffer = try? vImage_Buffer(cgImage: imageConversionToCGImage, format: imageFormat, flags: .noFlags),
      // ARGB image
      let argb8888CGImage = try? sourceBuffer.createCGImage(format: imageFormat) else { return }

I am getting the image data from the CGImage's dataProvider as CFData. Is this the right approach, or am I doing it wrong? What should I do to get the ARGB data?
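Here is roughly how I read the bytes at the moment (a sketch; the per-pixel A, R, G, B channel order is my assumption based on the alpha-first format above):

guard let cfData = argb8888CGImage.dataProvider?.data,
      let bytes = CFDataGetBytePtr(cfData) else { return }

print("byte count:", CFDataGetLength(cfData),
      "bytes per row:", argb8888CGImage.bytesPerRow)

// First pixel - with an alpha-first format and the default (big-endian)
// byte order, the channels should read back as A, R, G, B.
let (a, r, g, b) = (bytes[0], bytes[1], bytes[2], bytes[3])
print(a, r, g, b)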

DURGA_PRASAD
  • It looks right to me, though be aware of the BGRA encoding, which is "alpha first" but with a 32-bit little-endian byte order tacked onto it. I would suggest using the color space attached to the CGImage to avoid the expense of color matching. Whether this is actually the right thing to do depends on what you are going to do with the image. If you plan to composite it into some other drawing surface, then it should match the color space of the target buffer / context. Now is a great time to convert it in that case. – Ian Ollmann Dec 06 '22 at 08:39
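To illustrate that suggestion, the format could be built from the image's own color space rather than device RGB (a sketch; the device-RGB fallback for images without a color space is an assumption):

guard let imageConversionToCGImage = img.cgImage,
      let imageFormat = vImage_CGImageFormat(
        bitsPerComponent: 8,
        bitsPerPixel: 32,
        // Reuse the source image's color space to avoid a color match.
        colorSpace: imageConversionToCGImage.colorSpace ?? CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.first.rawValue),
        renderingIntent: .defaultIntent) else { return }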

1 Answer


When you pass a populated vImage_CGImageFormat to that initializer, vImage will attempt to convert the source image to the specified format. For example, you could pass a grayscale image format such as:

let imageFormat = vImage_CGImageFormat(
    bitsPerComponent: 8,
    bitsPerPixel: 8,
    colorSpace: CGColorSpaceCreateDeviceGray(),
    bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
    renderingIntent: .defaultIntent)

And the returned buffer will contain a grayscale representation of the source image regardless of the source image's format.
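A minimal sketch of that conversion, reusing the same throwing initializers as the question's snippet (cgImage is assumed to be the source image; the format is unwrapped first because this vImage_CGImageFormat initializer is failable):

if let grayFormat = imageFormat,
   let grayBuffer = try? vImage_Buffer(cgImage: cgImage, format: grayFormat, flags: .noFlags),
   let grayCGImage = try? grayBuffer.createCGImage(format: grayFormat) {
    // grayCGImage is an 8-bit grayscale rendition of the source,
    // whatever the source's original pixel format was.
    print(grayCGImage.bitsPerPixel)   // 8
    grayBuffer.free()                 // the buffer owns its pixel memory
}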

If you want to populate a buffer based on the format of the source image, pass an empty vImage_CGImageFormat and use the vImageBuffer_InitWithCGImage function:

var imageFormat = vImage_CGImageFormat()
var sourceBuffer = vImage_Buffer()

// The function returns a vImage_Error; anything other than kvImageNoError
// means the buffer was not initialized.
let error = vImageBuffer_InitWithCGImage(&sourceBuffer,
                                         &imageFormat,
                                         nil,
                                         cgImage,
                                         vImage_Flags(kvImageNoFlags))

guard error == kvImageNoError else { return }

On return, imageFormat contains the color space and bit depth of the source image, and sourceBuffer contains the image itself.
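As a quick check on the result (names as in the snippet above), you can inspect the discovered format, and the buffer's pixel memory should be freed once you have finished with it:

// imageFormat now describes the source CGImage's pixel layout.
print(imageFormat.bitsPerComponent, imageFormat.bitsPerPixel)
print(imageFormat.bitmapInfo.contains(.byteOrder32Little))  // endianness hint

// vImage allocated the pixel memory for the buffer; release it when done.
sourceBuffer.free()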

Apple has recently released vImage.PixelBuffer, which provides a Swift-friendly API for vImage. Take a look at init(cgImage:cgImageFormat:pixelFormat:).
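A rough sketch of what that might look like (requires macOS 13 / iOS 16; using vImage.Interleaved8x4 as the 8-bit, 4-channel interleaved pixel format is my assumption):

guard var argbFormat = vImage_CGImageFormat(
    bitsPerComponent: 8,
    bitsPerPixel: 32,
    colorSpace: CGColorSpaceCreateDeviceRGB(),
    bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.first.rawValue),
    renderingIntent: .defaultIntent) else { return }

// Sketch only: wraps the CGImage in an interleaved 8-bit, 4-channel buffer.
if let pixelBuffer = try? vImage.PixelBuffer(
    cgImage: cgImage,
    cgImageFormat: &argbFormat,
    pixelFormat: vImage.Interleaved8x4.self) {
    print(pixelBuffer.width, pixelBuffer.height)
}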

Finally, the documentation for init(data:width:height:byteCountPerRow:pixelFormat:) includes an example of creating a vImage pixel buffer from a Core Graphics image's underlying data.

Flex Monkey