I'm detecting faces correctly, but I've noticed that the bounding box coordinates are subject to micro variations even when the detected face stays completely still. I'm wondering whether this is normal behaviour or whether I'm doing something wrong. I'm using two TextureViews: one to display the camera preview and one for the face detection overlay.
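For context, the two views are stacked on top of each other and the overlay is made transparent, roughly like this (a minimal sketch rather than my exact code; the view IDs are placeholders):

cameraTextureView = (TextureView) findViewById(R.id.camera_texture_view);
cameraOverlay = (TextureView) findViewById(R.id.camera_overlay);
cameraOverlay.setOpaque(false); // transparent, so the preview underneath stays visible
cameraTextureView.setSurfaceTextureListener(this);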
An example of my code:
@Override
public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
    cameraTextureView.setSurfaceTextureListener(null);
    try {
        if (camera == null) {
            camera = Camera.open(CID);
        }
        // camera parameters init .... code ... code ....
        camera.setPreviewCallback(new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] data, Camera camera) {
                final byte[] frame = data;
                camera.setFaceDetectionListener(new Camera.FaceDetectionListener() {
                    @Override
                    public void onFaceDetection(Camera.Face[] faces, Camera c2) {
                        if (faces.length > 0) {
                            Camera.Face face = faces[0];
                            Canvas canvas = cameraOverlay.lockCanvas(null);
                            if (canvas == null) return;
                            canvas.drawColor(0, PorterDuff.Mode.CLEAR);
                            RectF bounds = new RectF(face.rect.left, face.rect.top,
                                    face.rect.right, face.rect.bottom);
                            /* START - convert driver coordinates to View coordinates in pixels */
                            matrix.setScale(-1, 1); // for front-facing camera (matrix.setScale(1, 1) otherwise)
                            matrix.postRotate(displayOrientation);
                            // Camera driver coordinates range from (-1000, -1000) to (1000, 1000).
                            // UI coordinates range from (0, 0) to (width, height).
                            matrix.postScale(cameraPrevWidthBox / 2000f, cameraPrevHeightBox / 2000f);
                            matrix.postTranslate(cameraPrevWidthBox / 2f, cameraPrevHeightBox / 2f);
                            matrix.mapRect(bounds);
                            /* END */
                            canvas.drawRect(bounds, faceBoxPaint);
                            cameraOverlay.unlockCanvasAndPost(canvas);
                        }
                    }
                });
            }
        });
        camera.setPreviewTexture(surface);
        camera.startPreview();
        camera.startFaceDetection();
    } catch (Exception e) {
        // error handling omitted
    }
}
I thought it might be related to the autofocus or the stabilisation functions, but apparently that is not the case.
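This is roughly what I tried in order to rule them out, applied during the camera parameters init (a sketch of the idea rather than my exact code):

Camera.Parameters params = camera.getParameters();
if (params.getSupportedFocusModes().contains(Camera.Parameters.FOCUS_MODE_FIXED)) {
    params.setFocusMode(Camera.Parameters.FOCUS_MODE_FIXED); // no continuous refocusing
}
if (params.isVideoStabilizationSupported()) {
    params.setVideoStabilization(false); // rule out stabilisation shifting the frame
}
camera.setParameters(params);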
I'm running my code on a Samsung S7 with Android 7.0.