I've been using the Vision API from Google Play Services in my app, and everything works fine on my devices (a 2012 Nexus 7 with Android 5.1 and a cheaper tablet with Android 4.2.2), but not on some of the production devices.
We keep the face detection framework of the Vision API running the whole time our app is open, so the app changes its content whenever a face is detected.
The problem appears when we play a video in a VideoView while face detection is running: some kind of "ghost" shows up on top of the VideoView, and we have seen that this "ghost" is actually the real-time preview that the face detector is capturing.
It's complicated to explain, so we have recorded a video that illustrates the problem better: Video
So far, I have tried the following:

- Changing the dimensions passed to .setRequestedPreviewSize(int, int). The "ghost" changes its size accordingly, so we realized that the preview is what causes the problem.
- Removing the call to .setRequestedPreviewSize(int, int) from the CameraSource.Builder. In that case it internally defaults to 1024x768, as you can see in CameraSource, so the "ghost" fills the entire screen.
- Trying another framework to play the video: replacing the VideoView with one based on a TextureView doesn't help either; the ghost still shows.
- Using different video formats doesn't help either.
I think this can be some kind of problem that appears when more than one SurfaceView or SurfaceTexture is working at the same time, one on top of another, but this is the first time I have worked on a multimedia-oriented app.
Does anybody have an idea of what the problem could be?
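If it really is two surfaces fighting over z-order, one thing that might be worth trying (I haven't verified that it fixes the ghost) is forcing the VideoView's surface into the media overlay layer; setZOrderMediaOverlay() is inherited from SurfaceView, which VideoView extends:

```java
// Untested idea: VideoView extends SurfaceView, so before playback starts we
// could try to place its surface above other media surfaces in the z-order.
mVideoView.setZOrderMediaOverlay(true);
```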
Thanks in advance.
EDIT
Just to clarify, I'm posting the code I'm using.
This is the method used by the app shown in the video:
private void setupFaceDetector() {
    Log.d(TAG, "setupFaceDetector");
    faceDetector = new FaceDetector.Builder(this)
            .setProminentFaceOnly(true)
            .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
            .build();
    if (!faceDetector.isOperational()) {
        retryIn(1000);
    } else {
        faceDetector.setProcessor(new LargestFaceFocusingProcessor(faceDetector, new FaceTracker(this)));
        if (BuildConfig.FLAVOR.equals("withPreview")) {
            mCameraSource = new CameraSource.Builder(this, faceDetector)
                    .setFacing(CameraSource.CAMERA_FACING_FRONT)
                    .setRequestedPreviewSize(320, 240)
                    .build();
        } else {
            mCameraSource = new CameraSource.Builder(this, faceDetector)
                    .setFacing(CameraSource.CAMERA_FACING_FRONT)
                    .build();
        }
    }
}
I'm using product flavors to test different configurations; this project exists only to make testing this feature easier.
When the onResume() method is called, I load the video from a File and start the CameraSource instance:
private void initializeVideo() {
    mVideoView.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
        @Override
        public void onPrepared(MediaPlayer mp) {
            mp.start();
        }
    });
    mVideoView.setOnErrorListener(new MediaPlayer.OnErrorListener() {
        @Override
        public boolean onError(MediaPlayer mp, int what, int extra) {
            Log.d(TAG, "Error playing the video");
            return false;
        }
    });
    mVideoView.setOnCompletionListener(new MediaPlayer.OnCompletionListener() {
        @Override
        public void onCompletion(MediaPlayer mp) {
            playVideo();
        }
    });
}
private void startCameraSource() {
    try {
        mCameraSource.start();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
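Putting it together, the onResume() wiring is roughly this (mVideoFile is just a placeholder name for the local file; the actual loading code is omitted):

```java
// Sketch of the lifecycle wiring described above.
@Override
protected void onResume() {
    super.onResume();
    initializeVideo();
    // mVideoFile is a hypothetical field holding the video File.
    mVideoView.setVideoPath(mVideoFile.getAbsolutePath());
    startCameraSource();
}
```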
Just to clarify: we are using FaceTracker just to detect faces, overriding only its public void onNewItem(int id, Face face) and public void onMissing(Detector.Detections<Face> detections) callbacks.
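The FaceTracker looks roughly like this (the callback bodies are simplified; the real ones change the app's content):

```java
// Simplified sketch of our Tracker<Face> subclass: only these two
// callbacks matter for switching the content.
private static class FaceTracker extends Tracker<Face> {
    private final Context context;

    FaceTracker(Context context) {
        this.context = context;
    }

    @Override
    public void onNewItem(int id, Face face) {
        // A face appeared in front of the device: show the face-specific content.
    }

    @Override
    public void onMissing(Detector.Detections<Face> detections) {
        // The face is gone: revert to the default content.
    }
}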
The XML layout that shows the VideoView is:

<VideoView
    android:id="@+id/videoView"
    android:layout_width="0dp"
    android:layout_height="match_parent"
    android:layout_weight="3"/>

<ScrollView
    android:id="@+id/scroll"
    android:layout_width="0dp"
    android:layout_height="match_parent"
    android:layout_weight="1">

    <TextView
        android:id="@+id/tv_log"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:textColor="@android:color/white"/>
</ScrollView>