
I made an AVFoundation camera that crops square images, based on @fsaint's answer: Cropping AVCaptureVideoPreviewLayer output to a square. The sizing of the photo is great; that works perfectly. However, the image quality is noticeably degraded (see below: the first image is the preview layer showing good resolution; the second is the degraded image that was captured). It definitely has to do with what happens in processImage:, as the image resolution is fine without it, just not the right aspect ratio. The documentation on image processing is pretty bare, so any insights are greatly appreciated!

Setting up camera:

func setUpCamera() {

      captureSession = AVCaptureSession()
      captureSession!.sessionPreset = AVCaptureSessionPresetPhoto
      let backCamera = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
      if backCamera?.hasFlash == true { // check the flash capability itself, not just that backCamera is non-nil
         do {
            try backCamera!.lockForConfiguration()
            backCamera!.flashMode = AVCaptureFlashMode.Auto
            backCamera!.unlockForConfiguration()
         } catch {
            // error handling
         }
      }
      var error: NSError?
      var input: AVCaptureDeviceInput!
      do {
         input = try AVCaptureDeviceInput(device: backCamera)
      } catch let error1 as NSError {
         error = error1
         input = nil
      }
      if error == nil && captureSession!.canAddInput(input) {
         captureSession!.addInput(input)
         stillImageOutput = AVCaptureStillImageOutput()
         stillImageOutput!.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
         if captureSession!.canAddOutput(stillImageOutput) {
            captureSession!.sessionPreset = AVCaptureSessionPresetHigh
            captureSession!.addOutput(stillImageOutput)
            previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            previewLayer!.videoGravity = AVLayerVideoGravityResizeAspectFill
            previewLayer!.connection?.videoOrientation = AVCaptureVideoOrientation.Portrait
            previewVideoView.layer.addSublayer(previewLayer!)
            captureSession!.startRunning()
         }
      }
   }

Snapping photo:

@IBAction func onSnapPhotoButtonPressed(sender: UIButton) {

      if let videoConnection = self.stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo) {
         videoConnection.videoOrientation = AVCaptureVideoOrientation.Portrait
         self.stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: { (sampleBuffer, error) in

            if (sampleBuffer != nil) {

               let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
               let dataProvider = CGDataProviderCreateWithCFData(imageData)
               let cgImageRef = CGImageCreateWithJPEGDataProvider(dataProvider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)

               let image = UIImage(CGImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.Right)   
               self.processImage(image)
               self.clearPhotoButton.hidden = false
               self.nextButton.hidden = false
               self.view.bringSubviewToFront(self.imageView)
            }
         })
      }
   }

Process image to square:

  func processImage(image:UIImage) {

      let deviceScreen = previewLayer?.bounds
      let width:CGFloat = (deviceScreen?.size.width)!
      UIGraphicsBeginImageContext(CGSizeMake(width, width))
      let aspectRatio:CGFloat = image.size.height * width / image.size.width
      image.drawInRect(CGRectMake(0, -(aspectRatio - width) / 2.0, width, aspectRatio))
      let smallImage = UIGraphicsGetImageFromCurrentImageContext()
      UIGraphicsEndImageContext()
      let cropRect = CGRectMake(0, 0, width, width)
      let imageRef:CGImageRef = CGImageCreateWithImageInRect(smallImage.CGImage, cropRect)!
      imageView.image = UIImage(CGImage: imageRef)
   }

(Image: preview layer)

(Image: processed image)


1 Answer


There are a few things wrong with your processImage() function.

First of all, you're creating a new graphics context with UIGraphicsBeginImageContext().

According to the Apple docs on this function:

This function is equivalent to calling the UIGraphicsBeginImageContextWithOptions function with the opaque parameter set to NO and a scale factor of 1.0.

Because the scale factor is 1.0, it is going to look pixelated when displayed on-screen, as the screen's resolution is (most likely) higher.
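To make the equivalence concrete, here's a minimal sketch (using the `width` variable from your processImage()):

// This call...
UIGraphicsBeginImageContext(CGSizeMake(width, width))

// ...is shorthand for this one, with opaque set to false and the scale hard-coded to 1.0:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, width), false, 1.0)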

You want to use the UIGraphicsBeginImageContextWithOptions() function and pass 0.0 for the scale factor. According to the docs on this function, for the scale argument:

If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.

For example:

UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, width), false, 0.0)

Your output should now look nice and crisp, as it is being rendered with the same scale as the screen.
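If you want to check the scale, here's a quick sketch (again assuming `width` as above); an image rendered with a 0.0 scale factor reports the screen's scale:

UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, width), false, 0.0)
let rendered = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

// On a Retina device rendered.scale equals UIScreen.mainScreen().scale (e.g. 2.0),
// so rendered.size is still (width, width) in points, but the backing bitmap
// holds width * scale pixels per side.
print(rendered.scale == UIScreen.mainScreen().scale) // true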


Second of all, there's a problem with the width you're passing in.

let width:CGFloat = (deviceScreen?.size.width)!
UIGraphicsBeginImageContext(CGSizeMake(width, width))

You shouldn't be passing in the width of the screen here; it should be the width of the image. For example:

let width:CGFloat = image.size.width

You will then have to change the aspectRatio variable to take the screen width, such as:

let aspectRatio:CGFloat = image.size.height * (deviceScreen?.size.width)! / image.size.width

Third of all, you can simplify your cropping function significantly.

func processImage(image:UIImage) {

    let screenWidth = UIScreen.mainScreen().bounds.size.width

    let width:CGFloat = image.size.width
    let height:CGFloat = image.size.height

    let aspectRatio = screenWidth / width

    UIGraphicsBeginImageContextWithOptions(CGSizeMake(screenWidth, screenWidth), false, 0.0) // create context
    let ctx = UIGraphicsGetCurrentContext()

    CGContextTranslateCTM(ctx, 0, (screenWidth-(aspectRatio*height))*0.5) // shift the context up to create a squared 'frame' for the image to be drawn in

    image.drawInRect(CGRect(origin:CGPointZero, size: CGSize(width:screenWidth, height:height*aspectRatio))) // draw image

    let img = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    imageView.image = img
}

There's no need to draw the image twice; you just need to translate the context up and then draw the image once.
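To see why the translation centers the crop, here's a worked example with hypothetical numbers: a 3024×4032 portrait capture shown on a 375pt-wide screen.

let width: CGFloat = 3024, height: CGFloat = 4032, screenWidth: CGFloat = 375

let aspectRatio = screenWidth / width          // ≈ 0.124
let scaledHeight = height * aspectRatio        // = 500, the image's height once scaled to the screen width
let shift = (screenWidth - scaledHeight) * 0.5 // = -62.5

// CGContextTranslateCTM(ctx, 0, shift) moves the drawing origin up by 62.5pt,
// so the middle 375×375 band of the scaled 375×500 image lands inside the
// square context; the strips above and below fall outside and are clipped.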

  • Thanks for the response @originaluser2! I tried your solution, and the image resolution is sharp; however, the image is now very zoomed in. Any ideas on why this is happening? Thanks again! – tahoecoop Feb 12 '16 at 20:31
  • @tahoecoop yes, I just noticed you're creating the context with the size of the screen, not the size of the image. I have edited my answer. – Hamish Feb 12 '16 at 20:37
  • I understand what you're saying and I tried your edit. Unfortunately it's still zooming in similar to what it was doing before. Am I messing up the sizing when I use ` let cropRect = CGRectMake(0, 0, width, width)`? Thanks again!! – tahoecoop Feb 12 '16 at 20:45
  • @tahoecoop ah yes, actually you should simply change the `width` variable to the image width, and then adjust your `aspectRatio` variable to take the screen width. – Hamish Feb 12 '16 at 20:51
  • @tahoecoop actually, I've just re-worked your function and it should now do what you want it to do. – Hamish Feb 12 '16 at 21:03
  • You are the man! Your `processImage:` does the trick perfectly. Really appreciate the help, this is awesome!! – tahoecoop Feb 12 '16 at 21:16