
I'm developing a Face Detection feature with Camera2 and MLKit.

In the Developer Guide, in the Performance Tips section, they say to capture images in ImageFormat.YUV_420_888 format when using the Camera2 API, which is my case.

Then, in the Face Detector section, they recommend using an image with dimensions of at least 480x360 pixels for real-time face recognition, which is again my case.

OK, let's go! Here is my code, which works well:

private fun initializeCamera() = lifecycleScope.launch(Dispatchers.Main) {

    // Open the selected camera
    cameraDevice = openCamera(cameraManager, getCameraId(), cameraHandler)

    val previewSize = if (isPortrait) {
        Size(RECOMMANDED_CAPTURE_SIZE.width, RECOMMANDED_CAPTURE_SIZE.height)
    } else {
        Size(RECOMMANDED_CAPTURE_SIZE.height, RECOMMANDED_CAPTURE_SIZE.width)
    }

    // Initialize an image reader which will be used to display a preview
    imageReader = ImageReader.newInstance(
            previewSize.width, previewSize.height, ImageFormat.YUV_420_888, IMAGE_BUFFER_SIZE)

    // Retrieve preview's frame and run detector
    imageReader.setOnImageAvailableListener({ reader ->
        lifecycleScope.launch(Dispatchers.Main) {
            val image = reader.acquireNextImage()
            logD { "Image available: ${image.timestamp}" }
            faceDetector.runFaceDetection(image, getRotationCompensation())
            image.close()
        }
    }, imageReaderHandler)

    // Creates list of Surfaces where the camera will output frames
    val targets = listOf(viewfinder.holder.surface, imageReader.surface)

    // Start a capture session using our open camera and list of Surfaces where frames will go
    session = createCaptureSession(cameraDevice, targets, cameraHandler)
    val captureRequest = cameraDevice.createCaptureRequest(
            CameraDevice.TEMPLATE_PREVIEW).apply {
        addTarget(viewfinder.holder.surface)
        addTarget(imageReader.surface)
    }

    // This will keep sending the capture request as frequently as possible until the
    // session is torn down or session.stopRepeating() is called
    session.setRepeatingRequest(captureRequest.build(), null, cameraHandler)
}
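
(For reference, runFaceDetection() is not shown above. Here is a minimal sketch of how it could feed the YUV frame to ML Kit; it assumes the standalone ML Kit face detection API and kotlinx-coroutines-play-services for await(), which may differ from my actual implementation.)

import android.media.Image
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.face.Face
import com.google.mlkit.vision.face.FaceDetection
import com.google.mlkit.vision.face.FaceDetectorOptions
import kotlinx.coroutines.tasks.await

class FaceDetector {

    // FAST mode is the recommended performance mode for real-time detection on a live preview
    private val detector = FaceDetection.getClient(
            FaceDetectorOptions.Builder()
                    .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_FAST)
                    .build())

    // Suspends until detection completes, so the caller can safely close the Image afterwards
    suspend fun runFaceDetection(image: Image, rotationDegrees: Int): List<Face> {
        val inputImage = InputImage.fromMediaImage(image, rotationDegrees)
        return detector.process(inputImage).await()
    }
}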

Now, I want to capture a still image... and this is my problem because, ideally, I want:

  • a full-resolution image or, at least, one bigger than 480x360
  • in JPEG format, to be able to save it

The Camera2Basic sample demonstrates how to capture an image (the Video and SlowMotion samples are crashing), and the MLKit sample uses the old Camera API! Fortunately, I've succeeded in mixing these samples to develop my feature, but I've failed to capture a still image at a different resolution.

I think I have to stop the preview session and recreate one for image capture, but I'm not sure...

What I have done is the following, but it still captures images at 480x360:

session.stopRepeating()

// Unset the image reader listener
imageReader.setOnImageAvailableListener(null, null)
// Initialize a new image reader which will be used to capture still photos
// imageReader = ImageReader.newInstance(768, 1024, ImageFormat.JPEG, IMAGE_BUFFER_SIZE)

// Start a new image queue
val imageQueue = ArrayBlockingQueue<Image>(IMAGE_BUFFER_SIZE)
imageReader.setOnImageAvailableListener({ reader ->
    val image = reader.acquireNextImage()
    logD { "[Still] Image available in queue: ${image.timestamp}" }
    if (imageQueue.size >= IMAGE_BUFFER_SIZE - 1) {
        imageQueue.take().close()
    }
    imageQueue.add(image)
}, imageReaderHandler)

// Creates list of Surfaces where the camera will output frames
val targets = listOf(viewfinder.holder.surface, imageReader.surface)
val captureRequest = createStillCaptureRequest(cameraDevice, targets)
session.capture(captureRequest, object : CameraCaptureSession.CaptureCallback() {
    override fun onCaptureCompleted(
        session: CameraCaptureSession,
        request: CaptureRequest,
        result: TotalCaptureResult) {
        super.onCaptureCompleted(session, request, result)
        val resultTimestamp = result.get(CaptureResult.SENSOR_TIMESTAMP)
        logD {"Capture result received: $resultTimestamp"}
        // Set a timeout in case image captured is dropped from the pipeline
        val exc = TimeoutException("Image dequeuing took too long")
        val timeoutRunnable = Runnable {
            continuation.resumeWithException(exc)
        }
        imageReaderHandler.postDelayed(timeoutRunnable, IMAGE_CAPTURE_TIMEOUT_MILLIS)
        // Loop in the coroutine's context until an image with matching timestamp comes
        // We need to launch the coroutine context again because the callback is done in
        //  the handler provided to the `capture` method, not in our coroutine context
        @Suppress("BlockingMethodInNonBlockingContext")
        lifecycleScope.launch(continuation.context) {
            while (true) {
                // Dequeue images while timestamps don't match
                val image = imageQueue.take()
                if (image.timestamp != resultTimestamp)
                  continue
                logD {"Matching image dequeued: ${image.timestamp}"}

                // Unset the image reader listener
                imageReaderHandler.removeCallbacks(timeoutRunnable)
                imageReader.setOnImageAvailableListener(null, null)

                // Clear the queue of images, if there are any left
                while (imageQueue.size > 0) {
                    imageQueue.take().close()
                }
                // Compute EXIF orientation metadata
                val rotation = getRotationCompensation()
                val mirrored = cameraFacing == CameraCharacteristics.LENS_FACING_FRONT
                val exifOrientation = computeExifOrientation(rotation, mirrored)
                logE {"captured image size (w/h): ${image.width} / ${image.height}"}
                // Build the result and resume progress
                continuation.resume(CombinedCaptureResult(
                    image, result, exifOrientation, imageReader.imageFormat))
                // There is no need to break out of the loop, this coroutine will suspend
            }
        }
    }
}, cameraHandler)
}

If I uncomment the new ImageReader instantiation, I get this exception:

java.lang.IllegalArgumentException: CaptureRequest contains unconfigured Input/Output Surface!

Can anyone help me?

Bruno
  • `Camera 2` is a royal pain in the ass. `Camera X` seems promising because of the image analyzer as @Martin pointed out. If you want to enjoy your life, please consider using `CameraView` (https://github.com/natario1/CameraView); you should be up and running with a few lines of code. – Guanaco Devs May 13 '20 at 09:09

2 Answers


This IllegalArgumentException:

java.lang.IllegalArgumentException: CaptureRequest contains unconfigured Input/Output Surface!

... obviously refers to imageReader.surface.
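
In Camera2, a CaptureRequest may only target Surfaces that were included in the list passed to createCaptureSession(); swapping in a new ImageReader after the session has been configured leaves its Surface unconfigured, hence the exception. A minimal sketch of one way around this, reusing the question's helpers and a hypothetical stillImageReader created up front (size and format are placeholders):

// Create the still-capture reader before the session is configured
stillImageReader = ImageReader.newInstance(1024, 768, ImageFormat.JPEG, IMAGE_BUFFER_SIZE)

// Register its Surface together with the preview targets when creating the session
val targets = listOf(viewfinder.holder.surface, imageReader.surface, stillImageReader.surface)
session = createCaptureSession(cameraDevice, targets, cameraHandler)

// Later, a still capture can target that Surface because it is already configured
val stillRequest = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE).apply {
    addTarget(stillImageReader.surface)
}
session.capture(stillRequest.build(), null, cameraHandler)

Whether this particular three-stream combination is available depends on the device's supported stream configurations.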


Meanwhile (with CameraX) this works differently; see CameraFragment.kt ...

Issue #197: Firebase Face Detection Api issue while using cameraX API;

there might soon be a sample application matching your use case.

Martin Zeitler
  • Thanks @Martin, I will check this. But, for operational reasons, I don't want to use CameraX since `PreviewView` is in alpha. And yes, the Exception refers to imageReader... but why? – Bruno Apr 27 '20 at 08:49

ImageReader is sensitive to the choice of format and/or combination of usage flags. The documentation points out that certain format combinations may be unsupported. On some Android devices (perhaps some older phone models) you might find the IllegalArgumentException is not thrown when using the JPEG format, but that doesn't help much: you want something versatile.

What I have done in the past is to use the ImageFormat.YUV_420_888 format (this will be backed by the hardware and the ImageReader implementation). This format does not contain pre-optimizations that prevent the application from accessing the image via its internal array of planes. I notice you have already used it successfully in your initializeCamera() method.

You may then extract the image data from the frame you want. Note that the ByteBuffers returned by getPlanes() are direct buffers, so they must be copied out rather than accessed through array():

Image.Plane[] planes = img.getPlanes();
int width = img.getWidth();
int height = img.getHeight();
ByteBuffer yBuffer = planes[0].getBuffer();   // Y plane
ByteBuffer vuBuffer = planes[2].getBuffer();  // V plane; interleaved VU on most devices (pixel stride 2)

// Assemble an NV21 byte array (Y followed by interleaved VU). This assumes the common
// semi-planar layout with no row padding (rowStride == width).
byte[] data = new byte[width * height * 3 / 2];
yBuffer.get(data, 0, yBuffer.remaining());
vuBuffer.get(data, width * height, vuBuffer.remaining());

and then, via a Bitmap, create the still image using JPEG compression, PNG, or whichever encoding you choose.

ByteArrayOutputStream out = new ByteArrayOutputStream();
YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, width, height, null);
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, out);
byte[] imageBytes = out.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
ByteArrayOutputStream out2 = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 75, out2);
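
ImageReader itself is not limited to 480x360: you can query the sizes the device supports for YUV_420_888 and create the still-capture reader with one of those. A minimal sketch in Kotlin (to match the question's code), where `characteristics` is assumed to come from cameraManager.getCameraCharacteristics(cameraId) and `stillImageReader` is a hypothetical second reader:

// Pick the largest YUV_420_888 output size the sensor supports
val configMap = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!
val largestYuv = configMap.getOutputSizes(ImageFormat.YUV_420_888)
        .maxByOrNull { it.width * it.height }!!

// Hypothetical second reader used only for still captures
stillImageReader = ImageReader.newInstance(
        largestYuv.width, largestYuv.height, ImageFormat.YUV_420_888, IMAGE_BUFFER_SIZE)

As noted in the other answer, the new reader's Surface must be included when the capture session is created, otherwise the same IllegalArgumentException comes back.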
dr_g
  • Thanks for the answer... but how do I capture a frame at a different resolution? It must surely be possible to capture a frame bigger than 480x360 pixels, but I don't know how... – Bruno May 11 '20 at 15:05
  • There is more than one approach to creating a scaled Bitmap from an original Bitmap. Would that be sufficient? Depending on the method, it can sometimes be a bottleneck when performance is key. Or is the question specifically about ImageReader? – dr_g May 11 '20 at 17:55
  • I don't want to upscale the 480x360 image; I want to capture a bigger image, and in another format – Bruno May 11 '20 at 19:45
  • Yes surely ImageReader supports larger than 480x360 pixels. But for reasons provided you should use the appropriate format and then convert to the output format you require. – dr_g May 12 '20 at 09:39