The CILanczosScaleTransform filter has two parameters:

- scale: the scaling factor to use on the image
- aspectRatio: the additional horizontal scaling factor to use on the image
Using these two parameters, you can resize an image to an exact target size. First compute the scaling factor that maps the source height to the target height. Applying that same factor to the source width generally won't produce the target width, so also compute an aspect ratio that stretches (or squeezes) the scaled width to match the target width.
import CoreImage

let context = CIContext()

// Load the source image
let imageURL = URL(fileURLWithPath: "sample.jpg")
guard let sourceImage = CIImage(contentsOf: imageURL) else {
    fatalError("Could not load image at \(imageURL.path)")
}

let resizeFilter = CIFilter(name: "CILanczosScaleTransform")!

// Desired output size
let targetSize = CGSize(width: 190, height: 230)

// Compute the scale that produces the target height, and the
// aspect ratio that corrects the scaled width to the target width
let scale = targetSize.height / sourceImage.extent.height
let aspectRatio = targetSize.width / (sourceImage.extent.width * scale)

// Apply the resize
resizeFilter.setValue(sourceImage, forKey: kCIInputImageKey)
resizeFilter.setValue(scale, forKey: kCIInputScaleKey)
resizeFilter.setValue(aspectRatio, forKey: kCIInputAspectRatioKey)
let outputImage = resizeFilter.outputImage
My sample image had dimensions of 2,048 × 1,536 pixels. The computed scaling factor was 0.1497395833333333 and the aspect ratio 0.6195652173913043, giving the target output dimensions of 190 × 230 pixels.
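Note that outputImage is only a recipe for the resized image: Core Image defers the actual pixel work until the image is rendered. As a minimal sketch of that last step (error handling kept deliberately simple), you can render through the CIContext created above to get a CGImage:

// Render the resized CIImage into a bitmap through the context.
if let outputImage = outputImage,
   let cgImage = context.createCGImage(outputImage, from: outputImage.extent) {
    // cgImage now holds the 190 × 230 bitmap; wrap it in an
    // NSImage/UIImage or write it to disk as needed.
    print("Rendered \(cgImage.width) x \(cgImage.height) image")
}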