In my custom detector I want to divide the image from the frame into halves, so I can process them separately. This is what I have so far in my custom Detector:
val imageArray = frame?.grayscaleImageData?.array()
val upperImageBuffer = ByteBuffer
    .allocate(imageArray?.size ?: 0)
    .put(imageArray ?: ByteArray(0), 0, imageArray?.size?.div(2) ?: 0)
val upperFrame = Frame.Builder()
    .setImageData(upperImageBuffer,
        frame?.metadata?.width ?: 0,
        frame?.metadata?.height?.div(2) ?: 0,
        frame?.metadata?.format ?: 16)
    .setRotation(frame?.metadata?.rotation ?: 0)
    .build()
val lowerFrame... etc
val upperDetections = delegateDetector.detect(upperFrame)
upperDetections.forEach { key, barcode ->
    if (barcode is Barcode) results.append(key, barcode)
}
val lowerDetections = delegateDetector.detect(lowerFrame) etc.
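For clarity, this is the buffer arithmetic I am assuming for a row-major grayscale image: each half should hold exactly `width * (height / 2)` bytes, and the lower half should start at the middle row's offset. A self-contained sketch with no Android dependencies (`splitGrayscale` is just an illustrative name, not part of the Vision API):

```kotlin
// Split a row-major grayscale byte array into top and bottom halves.
// Each half is width * (height / 2) bytes; the lower half starts at
// the byte offset of the middle row.
fun splitGrayscale(data: ByteArray, width: Int, height: Int): Pair<ByteArray, ByteArray> {
    val halfSize = width * (height / 2)
    val upper = data.copyOfRange(0, halfSize)
    val lower = data.copyOfRange(halfSize, halfSize * 2)
    return Pair(upper, lower)
}

fun main() {
    // 4x4 test image: each row is filled with its own row index
    val width = 4
    val height = 4
    val data = ByteArray(width * height) { (it / width).toByte() }
    val (upper, lower) = splitGrayscale(data, width, height)
    println(upper.toList()) // rows 0 and 1 -> [0, 0, 0, 0, 1, 1, 1, 1]
    println(lower.toList()) // rows 2 and 3 -> [2, 2, 2, 2, 3, 3, 3, 3]
}
```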
So far I am using the same detector on both halves. This is really just to check whether I recognise more results than on the whole frame; it may sound silly, but I leave the question as is, because someone in the future might need one part of the image processed by one detector and the other part by a different one.
Still, the problem is: I get the same results for both halves, and they are in fact the same as for the original frame. What am I doing wrong?