I am implementing the ML Kit face detection library in a simple application. The application is a facial monitoring system, so I am setting up a preview feed from the front camera and attempting to detect a face, using the Camera2 API. In my ImageReader.OnImageAvailableListener I want to run Firebase face detection on each incoming frame. After creating my FirebaseVisionImage and running the FirebaseVisionFaceDetector, I get an empty faces list: it should contain the detected faces, but it always has size 0 even though a face is clearly in the image.
I have tried other ways of creating my FirebaseVisionImage. Currently I create it from a byte array, which I build following the ML Kit docs. I have also tried creating the FirebaseVisionImage directly from the media.Image object.
private final ImageReader.OnImageAvailableListener onPreviewImageAvailableListener =
        new ImageReader.OnImageAvailableListener() {

    /** Get the latest image and convert it to a byte array. */
    @Override
    public void onImageAvailable(ImageReader reader) {
        // Get latest image
        Image mImage = reader.acquireNextImage();
        if (mImage == null) {
            return;
        }

        byte[] newImg = convertYUV420888ToNV21(mImage);
        FirebaseApp.initializeApp(MonitoringFeedActivity.this);

        FirebaseVisionFaceDetectorOptions highAccuracyOpts =
                new FirebaseVisionFaceDetectorOptions.Builder()
                        .setPerformanceMode(FirebaseVisionFaceDetectorOptions.ACCURATE)
                        .setLandmarkMode(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
                        .setClassificationMode(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
                        .build();

        int rotation = getRotationCompensation(frontCameraId,
                MonitoringFeedActivity.this, getApplicationContext());

        FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
                .setWidth(480)   // 480x360 is typically sufficient for image recognition
                .setHeight(360)
                .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
                .setRotation(rotation)
                .build();

        FirebaseVisionImage image = FirebaseVisionImage.fromByteArray(newImg, metadata);
        FirebaseVisionFaceDetector detector = FirebaseVision.getInstance()
                .getVisionFaceDetector(highAccuracyOpts);

        Task<List<FirebaseVisionFace>> result =
                detector.detectInImage(image)
                        .addOnSuccessListener(new OnSuccessListener<List<FirebaseVisionFace>>() {
                            @Override
                            public void onSuccess(List<FirebaseVisionFace> faces) {
                                // Task completed successfully
                                if (faces.size() != 0) {
                                    Log.i(TAG, String.valueOf(faces.get(0).getSmilingProbability()));
                                }
                            }
                        })
                        .addOnFailureListener(new OnFailureListener() {
                            @Override
                            public void onFailure(@NonNull Exception e) {
                                // Task failed with an exception
                            }
                        });

        mImage.close();
    }
};
The aim is for the resulting faces list to contain the faces detected in each processed frame.
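For reference, `convertYUV420888ToNV21` is not shown above; the interleaving it performs looks roughly like this (a simplified sketch with a hypothetical class name, assuming tightly packed planes with no row or pixel stride padding; the plane extraction from the android.media.Image is omitted). It also shows the size invariant I have been double-checking, since the preview frames may not actually be 480x360:

```java
// Hypothetical helper sketching the YUV_420_888 -> NV21 interleaving.
// Assumes the Y, U and V planes have already been copied into tightly
// packed byte arrays (no row/pixel stride handling shown).
public final class Nv21Sketch {

    // NV21 layout: full-resolution Y plane, then interleaved V/U pairs.
    public static byte[] interleaveToNv21(byte[] y, byte[] u, byte[] v) {
        byte[] nv21 = new byte[y.length + u.length + v.length];
        System.arraycopy(y, 0, nv21, 0, y.length);
        for (int i = 0; i < v.length; i++) {
            nv21[y.length + 2 * i] = v[i];      // V comes first in NV21
            nv21[y.length + 2 * i + 1] = u[i];
        }
        return nv21;
    }

    // An NV21 frame for a width x height image must be exactly
    // width * height * 3 / 2 bytes; a mismatch between the buffer length
    // and the metadata dimensions could explain empty detection results.
    public static int expectedNv21Length(int width, int height) {
        return width * height * 3 / 2;
    }
}
```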