
I am using CameraX to analyze images from the camera.

I get the image in YUV_420_888 format and I have managed to convert it to ARGB_8888.

I need every pixel to be stored on 3 bytes, with 8 bits of precision per channel and values from 0 to 255.

This is how I create my bitmap.

val bitmap = Bitmap.createBitmap(image.width, image.height, Bitmap.Config.ARGB_8888)

Is there a way to remove the alpha channel from ARGB_8888?

Marian Pavel
  • Maybe try converting directly from YUV to RGB with a renderscript like this: https://github.com/pinguo-yuyidong/Camera2/blob/master/camera2/src/main/rs/yuv2rgb.rs – Orcun Jul 08 '21 at 11:46
  • Or there is official YuvToRgbConverter util here: https://github.com/android/camera-samples/blob/3730442b49189f76a1083a98f3acf3f5f09222a3/CameraUtils/lib/src/main/java/com/example/android/camera/utils/YuvToRgbConverter.kt – Orcun Jul 08 '21 at 11:48
  • @Orcun I already use that converter but I don't need the Alpha. I have a model in Tensor Flow that needs an RGB image, not ARGB – Marian Pavel Jul 08 '21 at 13:06

1 Answer


There are multiple ways to do it, and I think doing it with OpenCV would probably be more efficient/faster. Here is the plain Java way:

public byte[] getImagePixels(Bitmap image) {
    // Calculate how many bytes our image consists of
    int bytes = image.getByteCount();

    ByteBuffer buffer = ByteBuffer.allocate(bytes); // Create a new buffer
    image.copyPixelsToBuffer(buffer); // Move the byte data to the buffer

    byte[] temp = buffer.array(); // Get the underlying array containing the data

    byte[] pixels = new byte[(temp.length / 4) * 3]; // Allocate for 3-byte BGR

    // Copy pixels into place
    for (int i = 0; i < (temp.length / 4); i++) {
        pixels[i * 3] = temp[i * 4 + 2];     // B
        pixels[i * 3 + 1] = temp[i * 4 + 1]; // G
        pixels[i * 3 + 2] = temp[i * 4];     // R

        // Alpha is discarded
    }

    return pixels;
}
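Note that the loop above produces BGR byte order (an ARGB_8888 bitmap's buffer is laid out R, G, B, A per pixel, matching the comments in the code). If your TensorFlow model expects RGB order, swap the assignments so that pixels[i * 3] takes temp[i * 4] and pixels[i * 3 + 2] takes temp[i * 4 + 2].

As for the OpenCV route mentioned above, a rough sketch could look like the following. This assumes the OpenCV Android SDK is already set up and initialized in the project, and the method name getImagePixelsOpenCv is just illustrative:

import org.opencv.android.Utils;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public byte[] getImagePixelsOpenCv(Bitmap image) {
    // Wrap the ARGB_8888 bitmap in an RGBA Mat
    Mat rgba = new Mat();
    Utils.bitmapToMat(image, rgba);

    // Drop the alpha channel, leaving 3 bytes (R, G, B) per pixel
    Mat rgb = new Mat();
    Imgproc.cvtColor(rgba, rgb, Imgproc.COLOR_RGBA2RGB);

    // Copy the continuous pixel data into a plain byte array
    byte[] pixels = new byte[(int) (rgb.total() * rgb.channels())];
    rgb.get(0, 0, pixels);
    return pixels;
}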
Orcun