Core Image's `CIAreaAverage` filter can easily be used to compute the average RGB color of a whole `CIImage`. For example:
```swift
import CoreImage

let options = [CIContextOption.workingColorSpace: kCFNull as Any]
let context = CIContext(options: options)

let parameters: [String: Any] = [
    kCIInputImageKey: inputImage, // assume this exists
    kCIInputExtentKey: CIVector(cgRect: inputImage.extent)
]
let filter = CIFilter(name: "CIAreaAverage", parameters: parameters)!

// CIAreaAverage outputs a 1x1 image; render it into a 4-float RGBA bitmap.
var bitmap = [Float32](repeating: 0, count: 4)
context.render(filter.outputImage!, toBitmap: &bitmap, rowBytes: 16,
               bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
               format: .RGBAf, colorSpace: nil)

let rAverage = bitmap[0]
let gAverage = bitmap[1]
let bAverage = bitmap[2] // note: index 2, not 3 (index 3 is alpha)
...
```
- modified from https://www.hackingwithswift.com/example-code/media/how-to-read-the-average-color-of-a-uiimage-using-ciareaaverage
However, suppose one does not want whole-`CIImage` color averaging. Breaking the image into regions of interest (ROIs) by varying the input extent (see `kCIInputExtentKey` above) and running a `CIAreaAverage` filter per ROI introduces many sequential steps, which decreases performance drastically. The filters cannot be chained, of course, since each output is a single 4-component color average (see `bitmap` above). Another way of describing this might be "average downsampling".
For example, say you have a 1080p image (1920x1080) and you want a 10x10 matrix of color averages from it. You would perform 100 `CIAreaAverage` operations with 100 different input extents, each corresponding to a 192x108 pixel ROI for which you want the R, G, B, and perhaps A, averages. But that is now 100 sequential `CIAreaAverage` operations, which is not performant.
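Here is a minimal sketch of what I mean by the sequential approach (assuming a `CIContext` like the one above; `averageGrid` is just an illustrative name and the 10x10 grid is only an example):

```swift
import CoreImage

/// Naive sequential approach: one CIAreaAverage render per cell of the grid.
/// For a 10x10 grid this is 100 separate render calls, which is the bottleneck.
func averageGrid(of image: CIImage, context: CIContext,
                 columns: Int, rows: Int) -> [[Float32]] {
    let cellWidth  = image.extent.width  / CGFloat(columns)
    let cellHeight = image.extent.height / CGFloat(rows)
    var averages = [[Float32]]()

    for row in 0..<rows {
        for col in 0..<columns {
            let roi = CGRect(x: image.extent.minX + CGFloat(col) * cellWidth,
                             y: image.extent.minY + CGFloat(row) * cellHeight,
                             width: cellWidth, height: cellHeight)
            let parameters: [String: Any] = [
                kCIInputImageKey: image,
                kCIInputExtentKey: CIVector(cgRect: roi)
            ]
            let filter = CIFilter(name: "CIAreaAverage", parameters: parameters)!
            var bitmap = [Float32](repeating: 0, count: 4)
            context.render(filter.outputImage!, toBitmap: &bitmap, rowBytes: 16,
                           bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                           format: .RGBAf, colorSpace: nil)
            averages.append(bitmap) // [R, G, B, A] for this ROI
        }
    }
    return averages
}
```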
Perhaps the next thing one might think to do is some sort of parallel for loop, e.g., a `DispatchQueue.concurrentPerform(iterations:execute:)` call with one iteration per ROI (a sketch follows the links below). However, I am not seeing a performance gain. (Note that `CIContext` is thread-safe, but `CIFilter` is not.)
- https://www.advancedswift.com/parallel-for-loops-in-swift/#parallel-for-loops-using-dispatchqueue
- https://developer.apple.com/documentation/coreimage/cicontext
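This is roughly the concurrent version I have in mind, as a sketch only: `averageGridConcurrently` is an illustrative name, each iteration builds its own `CIFilter` (since `CIFilter` is not thread-safe) and all iterations share the one `CIContext` (which is):

```swift
import CoreImage
import Dispatch

/// Parallel per-ROI averaging via concurrentPerform.
func averageGridConcurrently(of image: CIImage, context: CIContext,
                             columns: Int, rows: Int) -> [Float32] {
    let cellWidth  = image.extent.width  / CGFloat(columns)
    let cellHeight = image.extent.height / CGFloat(rows)

    // Flat output: 4 floats (RGBA) per ROI, row-major.
    var averages = [Float32](repeating: 0, count: 4 * columns * rows)
    averages.withUnsafeMutableBufferPointer { buffer in
        let base = buffer.baseAddress!
        DispatchQueue.concurrentPerform(iterations: columns * rows) { index in
            let col = index % columns
            let row = index / columns
            let roi = CGRect(x: image.extent.minX + CGFloat(col) * cellWidth,
                             y: image.extent.minY + CGFloat(row) * cellHeight,
                             width: cellWidth, height: cellHeight)
            let parameters: [String: Any] = [
                kCIInputImageKey: image,
                kCIInputExtentKey: CIVector(cgRect: roi)
            ]
            // A fresh CIFilter per iteration; only the CIContext is shared.
            let filter = CIFilter(name: "CIAreaAverage", parameters: parameters)!
            var bitmap = [Float32](repeating: 0, count: 4)
            context.render(filter.outputImage!, toBitmap: &bitmap, rowBytes: 16,
                           bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                           format: .RGBAf, colorSpace: nil)
            // Each iteration writes only its own 4-float slot, so no locking is needed.
            for c in 0..<4 { (base + 4 * index + c).pointee = bitmap[c] }
        }
    }
    return averages
}
```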
Logically the next idea might be to create a custom CIFilter, call it `CIMultiAreaAverage`. However, it's not obvious how to write a `CIKernel` that can examine a source pixel's location and map it to a particular destination pixel. You would need some buffer of information, such as a running ROI color sum, or you would have to treat the destination pixel itself as that buffer. The simplest approach might be to sum each channel per ROI into a destination with an integer type, render that to a bitmap, and then turn it into an average by casting to float and dividing by the number of pixels in the ROI.
- https://www.raywenderlich.com/25658084-core-image-tutorial-for-ios-custom-filters
- https://developer.apple.com/metal/MetalCIKLReference6.pdf
- https://developer.apple.com/documentation/coreimage/cicolorkernel
I wish I had access to the source code for `CIAreaAverage`. To encapsulate the full functionality in the `CIFilter`, you might have to go further and write what is really a custom Metal shader, so perhaps someone with expertise can help with how to accomplish this in a Metal shader.
Another option might be to use vDSP/vImage to perform these ROI operations. It seems easy to create the necessary `vImage_Buffer`s per ROI, but I'd want to make sure that creating them is an in-place operation (it probably is) for performance. Then I'm not sure which vDSP mean function to apply to a `vImage_Buffer`, or how, treating it like an array, if that's even possible (a rough sketch follows the links below). It sounds like this might be the most performant option.
- https://stackoverflow.com/a/36805765/6528990
- https://developer.apple.com/documentation/accelerate/applying_vimage_operations_to_regions_of_interest
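This is the kind of thing I have in mind, a sketch assuming interleaved RGBA Float32 pixel data already rendered into a parent `vImage_Buffer` (the CGImage/format conversion is omitted). The ROI buffer is just a view into the parent buffer per the Apple ROI article above, and `vDSP_meanv` with a stride of 4 pulls out each channel's mean:

```swift
import Accelerate

/// Per-ROI channel means over interleaved RGBA Float32 data, using a vImage_Buffer
/// "view" into the parent buffer (no copy) and vDSP_meanv with a stride of 4.
func roiMeans(parent: vImage_Buffer, roi: CGRect) -> (r: Float, g: Float, b: Float, a: Float) {
    let channels = 4
    let bytesPerPixel = channels * MemoryLayout<Float>.stride

    // A vImage_Buffer describing just the ROI: same rowBytes as the parent, with the
    // data pointer offset to the ROI's top-left pixel (a view, not a copy).
    let roiData = parent.data.advanced(by: Int(roi.minY) * parent.rowBytes
                                         + Int(roi.minX) * bytesPerPixel)
    let roiBuffer = vImage_Buffer(data: roiData,
                                  height: vImagePixelCount(roi.height),
                                  width: vImagePixelCount(roi.width),
                                  rowBytes: parent.rowBytes)

    // Rows may not be contiguous (rowBytes can include padding), so take the mean
    // per row per channel and average the row means (every row has equal length).
    let pixelsPerRow = Int(roiBuffer.width)
    var rowMeanSums = [Float](repeating: 0, count: channels)
    var rowMeans = [Float](repeating: 0, count: channels)
    for row in 0..<Int(roiBuffer.height) {
        let rowPointer = roiBuffer.data.advanced(by: row * roiBuffer.rowBytes)
                                       .assumingMemoryBound(to: Float.self)
        for channel in 0..<channels {
            vDSP_meanv(rowPointer + channel, vDSP_Stride(channels),
                       &rowMeans[channel], vDSP_Length(pixelsPerRow))
            rowMeanSums[channel] += rowMeans[channel]
        }
    }
    let rowCount = Float(roiBuffer.height)
    return (rowMeanSums[0] / rowCount, rowMeanSums[1] / rowCount,
            rowMeanSums[2] / rowCount, rowMeanSums[3] / rowCount)
}
```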
What does SO think?