
I'm detecting faces correctly, but I've noticed that the bounding box coordinates are subject to micro-variations even when the detected face stays completely still. I'm wondering whether this is normal behaviour or whether I'm doing something wrong. I'm using two TextureViews: one displays the camera preview and the other the face detection overlay.
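The two TextureViews are stacked on top of each other (e.g. in a FrameLayout), and the overlay is set up roughly like this (a sketch; the view ids and field names are assumptions):

cameraTextureView = (TextureView) findViewById(R.id.camera_preview); // shows the camera feed
cameraOverlay = (TextureView) findViewById(R.id.camera_overlay);     // transparent layer for the face boxes
cameraOverlay.setOpaque(false); // required, otherwise the overlay hides the preview underneath
cameraTextureView.setSurfaceTextureListener(this);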

An example of my code:

public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {

    cameraTextureView.setSurfaceTextureListener(null);
    try {
        if (camera == null) {
            camera = Camera.open(CID);
        }

        //camera parameters init .... code ...code....

        // Register the listener once, before starting detection; the driver
        // invokes it whenever it has a new set of detected faces.
        camera.setFaceDetectionListener(new Camera.FaceDetectionListener() {
            @Override
            public void onFaceDetection(Camera.Face[] faces, Camera c) {

                if (faces.length > 0) {
                    Camera.Face face = faces[0];

                    Canvas canvas = cameraOverlay.lockCanvas(null);
                    if (canvas == null) return;

                    canvas.drawColor(0, PorterDuff.Mode.CLEAR);

                    RectF bounds = new RectF(face.rect);

                    /* START - convert driver coordinates to view coordinates in pixels */
                    matrix.setScale(-1, 1); // mirror for the front-facing camera (matrix.setScale(1, 1) otherwise)
                    matrix.postRotate(displayOrientation);
                    // Camera driver coordinates range from (-1000, -1000) to (1000, 1000);
                    // UI coordinates range from (0, 0) to (width, height).
                    matrix.postScale(cameraPrevWidthBox / 2000f, cameraPrevHeightBox / 2000f);
                    matrix.postTranslate(cameraPrevWidthBox / 2f, cameraPrevHeightBox / 2f);
                    matrix.mapRect(bounds);
                    /* END */

                    // Draw the box only after it has been mapped into view coordinates.
                    canvas.drawRect(bounds, faceBoxPaint);

                    cameraOverlay.unlockCanvasAndPost(canvas);
                }
            }
        });

        camera.setPreviewTexture(surface);
        camera.startPreview();
        camera.startFaceDetection();

    } catch (Exception e) {
        // ~~
    }
}

I thought it could be related to autofocus or the image stabilisation functions, but apparently this is not the case.
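For reference, this is roughly how I ruled them out (a sketch; whether FOCUS_MODE_FIXED and stabilisation control are available depends on the device):

Camera.Parameters params = camera.getParameters();
// Lock the focus so continuous autofocus cannot move the lens.
if (params.getSupportedFocusModes().contains(Camera.Parameters.FOCUS_MODE_FIXED)) {
    params.setFocusMode(Camera.Parameters.FOCUS_MODE_FIXED);
}
// Turn off video stabilisation where the device supports toggling it.
if (params.isVideoStabilizationSupported()) {
    params.setVideoStabilization(false);
}
camera.setParameters(params);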

I'm running my code on a Samsung S7 with Android 7.0.

  • This is normal. Even if the face is perfectly still, changes in light levels or background could cause slight changes in where the algorithms think the face is. You could possibly put an algorithm on top to smooth those variations. Some variation on a Kalman filter perhaps. – Gabe Sechan Jun 01 '17 at 18:51
  • I second @GabeSechan. Also note that, in general, the smaller the camera's sensor, the more sensitive it will be to light fluctuations and the more light it will need to create a crisp, accurate image. Have you tried running your face detection outside in bright sunlight? The extra light [without the potential 120 Hz flicker](https://physics.stackexchange.com/questions/13400/what-invisible-flicker-do-different-types-of-light-bulbs-have) might make for a more stable solution. But then again, it might not. – ashbygeek Jun 01 '17 at 19:03
  • Ok, thank you guys. I'll try with a filter. – Izzy88 Jun 04 '17 at 19:06
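A minimal sketch of the smoothing suggested in the comments: an exponential moving average over the mapped rect, a lightweight stand-in for a full Kalman filter (the ALPHA constant and the smoothed field are assumed names):

private RectF smoothed;                  // last smoothed box, null until the first detection
private static final float ALPHA = 0.3f; // 0..1; lower = smoother but laggier

private RectF smooth(RectF raw) {
    if (smoothed == null) {
        smoothed = new RectF(raw);       // initialise with the first observation
    } else {
        // Move each edge a fraction of the way towards the new observation.
        smoothed.left   += ALPHA * (raw.left   - smoothed.left);
        smoothed.top    += ALPHA * (raw.top    - smoothed.top);
        smoothed.right  += ALPHA * (raw.right  - smoothed.right);
        smoothed.bottom += ALPHA * (raw.bottom - smoothed.bottom);
    }
    return smoothed;
}

Calling canvas.drawRect(smooth(bounds), faceBoxPaint) instead of drawing the raw bounds should damp the frame-to-frame jitter, at the cost of a small lag when the face actually moves.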

0 Answers