
In my custom Detector I want to divide the image from a frame into halves and process them separately. This is what I have so far:

val imageArray = frame?.grayscaleImageData?.array()
val upperImageBuffer = ByteBuffer
    .allocate(imageArray?.size ?: 0)
    .put(imageArray, 0, imageArray?.size?.div(2) ?: 0)
val upperFrame = Frame.Builder()
    .setImageData(upperImageBuffer,
        frame?.metadata?.width ?: 0,
        frame?.metadata?.height?.div(2) ?: 0,
        frame?.metadata?.format ?: 16)
    .setRotation(frame?.metadata?.rotation ?: 0)
    .build()

val lowerFrame... etc

val upperDetections = delegateDetector.detect(upperFrame)
upperDetections.forEach { key, barcode -> 
   if (barcode is Barcode) results.append(key, barcode) 
}

val lowerDetections = delegateDetector.detect(lowerFrame) etc.

So far I am using the same detector on both halves (this is actually to check whether I recognise more results than in the whole frame; as odd as that sounds, I leave the question as is, because someone in the future might need one part of the image processed by one detector and another part by a different one).

Still, the problem is: I get the same results for both halves, and in fact the same results as for the original frame. What am I doing wrong?

Antek

1 Answer


grayscaleImageData from CameraSource also includes color, but it is prefixed with the grayscale channel. That is, it is formatted as YUV rather than being just the Y channel (grayscale).

So rather than using imageArray.size, use frame.metadata.width * frame.metadata.height as the size of the grayscale plane.
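To make that concrete, here is a minimal sketch of the buffer arithmetic. It assumes the NV21 layout that CameraSource produces: the Y (grayscale) plane occupies the first width * height bytes, with the interleaved chroma data after it, so half of imageArray.size lands past the midpoint of the luma plane. The helper names (upperHalfLuma, lowerHalfLuma) are my own, not part of the Mobile Vision API:

```kotlin
import java.nio.ByteBuffer

// The Y plane is the first width * height bytes of the NV21 buffer,
// so the top half of the image is the first width * (height / 2) bytes.
fun upperHalfLuma(imageArray: ByteArray, width: Int, height: Int): ByteBuffer {
    val halfLumaSize = width * (height / 2)
    require(imageArray.size >= width * height) { "buffer smaller than one luma plane" }
    return ByteBuffer.allocate(halfLumaSize)
        .put(imageArray, 0, halfLumaSize)
}

// The bottom half starts halfway down the Y plane, still well before
// the chroma bytes that follow it.
fun lowerHalfLuma(imageArray: ByteArray, width: Int, height: Int): ByteBuffer {
    val halfLumaSize = width * (height / 2)
    return ByteBuffer.allocate(halfLumaSize)
        .put(imageArray, halfLumaSize, halfLumaSize)
}
```

Each returned buffer can then be handed to Frame.Builder().setImageData(buffer, width, height / 2, format) as in the question, one Frame per half.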

pm0733464
  • yeah, basically I just found out I was totally wrong about what's going on in my code, so I need to restate the question. Perhaps tomorrow, lots of work right now, unfortunately. Still that was what I ended up using. – Antek Jul 13 '17 at 15:48