I believe I have found a bug in Firebase ML Kit for Android, but perhaps other people have some insight into it.

I am currently streaming video from a drone to my Android device. It is decoded to YUV_420_888 with MediaCodec and written to an ImageReader surface. When I pass the Image received in the ImageReader callback to ML Kit using fromMediaImage, there are no errors, but it fails to detect anything in the image. If I instead convert the frame to a Bitmap using PixelCopy and pass that into ML Kit with fromBitmap, it succeeds at detecting features in the image.
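For reference, this is roughly what the two paths look like. It is a simplified Kotlin sketch rather than my exact code: the image labeler only stands in for the detector I actually use, the rotation is hard-coded to 0, and the Bitmap path shown here copies from the preview SurfaceView, which is one way to use PixelCopy.

```kotlin
import android.graphics.Bitmap
import android.media.ImageReader
import android.os.Handler
import android.view.PixelCopy
import android.view.SurfaceView
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.common.FirebaseVisionImageMetadata

// Stand-in detector; the real code may use a different FirebaseVision detector.
val detector = FirebaseVision.getInstance().onDeviceImageLabeler

// Path 1: feed the YUV_420_888 Image straight to ML Kit.
// This runs without errors but never returns any detections for me.
fun onImageAvailable(reader: ImageReader) {
    val image = reader.acquireLatestImage() ?: return
    val visionImage = FirebaseVisionImage.fromMediaImage(
        image, FirebaseVisionImageMetadata.ROTATION_0
    )
    detector.processImage(visionImage)
        .addOnSuccessListener { labels -> /* always empty in my case */ }
        .addOnCompleteListener { image.close() }
}

// Path 2: copy the preview SurfaceView into a Bitmap first.
// Detection then works as expected.
fun detectFromPreview(previewView: SurfaceView, handler: Handler) {
    val bitmap = Bitmap.createBitmap(
        previewView.width, previewView.height, Bitmap.Config.ARGB_8888
    )
    PixelCopy.request(previewView, bitmap, { result ->
        if (result == PixelCopy.SUCCESS) {
            detector.processImage(FirebaseVisionImage.fromBitmap(bitmap))
                .addOnSuccessListener { labels -> /* detections come back here */ }
        }
    }, handler)
}
```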

I think the problem may be that the video does not originate from the phone's camera but from an external camera on the drone (so the Camera2 APIs are not involved). I know the video feed works in general, because it can be previewed on a SurfaceView and because detection works once a frame has been turned into a Bitmap. This leads me to believe that ML Kit's parsing of the YUV_420_888 data is incorrect, and that this is what causes the issue.
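One workaround I am considering is to bypass fromMediaImage entirely: convert the YUV_420_888 planes to NV21 by hand (respecting the row and pixel strides) and hand ML Kit a byte array with fromByteArray instead. This is a rough, untested sketch of what I mean; the function names are placeholders:

```kotlin
import android.media.Image
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.common.FirebaseVisionImageMetadata

fun yuv420888ToNv21(image: Image): ByteArray {
    val width = image.width
    val height = image.height
    val nv21 = ByteArray(width * height * 3 / 2)

    // Copy the Y plane row by row (rowStride may be larger than width).
    val yPlane = image.planes[0]
    val yBuffer = yPlane.buffer
    var out = 0
    for (row in 0 until height) {
        yBuffer.position(row * yPlane.rowStride)
        yBuffer.get(nv21, out, width)
        out += width
    }

    // Interleave V and U (NV21 expects V first), honoring pixelStride.
    val uPlane = image.planes[1]
    val vPlane = image.planes[2]
    val uBuffer = uPlane.buffer
    val vBuffer = vPlane.buffer
    for (row in 0 until height / 2) {
        for (col in 0 until width / 2) {
            nv21[out++] = vBuffer.get(row * vPlane.rowStride + col * vPlane.pixelStride)
            nv21[out++] = uBuffer.get(row * uPlane.rowStride + col * uPlane.pixelStride)
        }
    }
    return nv21
}

fun toVisionImage(image: Image): FirebaseVisionImage {
    val metadata = FirebaseVisionImageMetadata.Builder()
        .setWidth(image.width)
        .setHeight(image.height)
        .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
        .setRotation(FirebaseVisionImageMetadata.ROTATION_0)
        .build()
    return FirebaseVisionImage.fromByteArray(yuv420888ToNv21(image), metadata)
}
```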

Anyone have any insight?

hellowill89
  • Have you tried taking a look at [this SO question](https://stackoverflow.com/questions/51144854/how-to-use-image-format-yuv-420-888-for-mlkit-of-google) or [this GitHub issue](https://github.com/flutter/flutter/issues/26348)? Both solutions seem to involve a format conversion or editing of the metadata – Bryen Mar 15 '20 at 16:47
  • Have you found a way to use a USB camera with android and ML-kit with CameraX? – Sejpalsinh Jadeja Nov 15 '21 at 16:41

0 Answers