
I am trying to use the new camera2 API. Burst capture was going too slow, so I use the YUV_420_888 format in the ImageReader and do the JPEG encoding later, as suggested in the following post:

Android camera2 capture burst is too slow

The problem is that I am getting green images when I try to encode JPEG from YUV_420_888 using RenderScript as follows:

RenderScript rs = RenderScript.create(mContext);
ScriptIntrinsicYuvToRGB yuvToRgbIntrinsic = ScriptIntrinsicYuvToRGB.create(rs, Element.RGBA_8888(rs));
Type.Builder yuvType = new Type.Builder(rs, Element.YUV(rs)).setX(width).setY(height).setYuvFormat(ImageFormat.YUV_420_888);
Allocation in = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT);

Type.Builder rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs)).setX(width).setY(height);
Allocation out = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT);

in.copyFrom(data);

yuvToRgbIntrinsic.setInput(in);
yuvToRgbIntrinsic.forEach(out);

Bitmap bmpout = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
out.copyTo(bmpout);

ByteArrayOutputStream baos = new ByteArrayOutputStream();
bmpout.compress(Bitmap.CompressFormat.JPEG, 100, baos);
byte[] jpegBytes = baos.toByteArray();

The data variable (the YUV_420_888 data) is obtained from:

ByteBuffer buffer = mImage.getPlanes()[0].getBuffer();
byte[] data = new byte[buffer.remaining()];
buffer.get(data);

What am I doing wrong in the JPEG encoding that makes the images come out green?

Thanks in advance

Edit: This is an example of the green images I obtain:

https://drive.google.com/file/d/0B1yCC7QDeEjdaXF2dVp6NWV6eWs/view?usp=sharing

    FWIW, a YUV value of 0,0,0 is a medium-green color. So if your image is entirely green, my guess is you're converting a buffer full of zeroes rather than a buffer full of YUV pixel data. – fadden Apr 15 '15 at 16:19
  • I have edited the question with an example of the images I am obtaining. They are not entirely green; they seem to be in green scale. I think that is because I only get the data from the first of the three planes that the YUV format has. I have searched for a way to get the info from all three planes and pass it to RenderScript, but I was not able to make the little code I found work. – Yamidragut Apr 16 '15 at 10:16
  • Hi, did you manage to solve this problem? – Alessandro Roaro Oct 04 '15 at 13:24
  • I tried your code and the saved PNG image is green. It seems that ScriptIntrinsicYuvToRGB cannot transform YUV_420_888 into a bitmap. Did you find another way to achieve it? – Jun Fang Feb 22 '17 at 02:44

6 Answers


So there are several layers of answers to this question.

First, I don't believe there's a direct way to copy an Image of YUV_420_888 data into an RS Allocation, even if the allocation is of format YUV_420_888.

So if you're not using the Image for anything other than this JPEG encoding step, you can use an Allocation as the output for the camera directly, via Allocation#getSurface and Allocation#ioReceive. Then you can perform your YUV->RGB conversion and read out the bitmap, along the lines of the sketch below.
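
For illustration, here is a minimal sketch of that route, assuming a camera2 capture session is configured elsewhere; the variable names are made up, and the allocation must be created with USAGE_IO_INPUT:

// Sketch: a YUV Allocation fed directly by the camera (names are illustrative).
RenderScript rs = RenderScript.create(context);
Type.Builder yuvType = new Type.Builder(rs, Element.YUV(rs))
        .setX(width).setY(height)
        .setYuvFormat(ImageFormat.YUV_420_888);
Allocation in = Allocation.createTyped(rs, yuvType.create(),
        Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);

// Hand the allocation's Surface to the camera as an output target when
// building the capture session and requests.
Surface camTarget = in.getSurface();

// Later, when a frame arrives (e.g. in an Allocation.OnBufferAvailableListener),
// latch it and convert it as before:
in.ioReceive();
yuvToRgbIntrinsic.setInput(in);
yuvToRgbIntrinsic.forEach(out);
out.copyTo(bmpout);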

However, note that JPEG files, under the hood, actually store YUV data, so when you go to compress the JPEG, Bitmap is going to do another RGB->YUV conversion as it saves the file. For maximum efficiency, then, you'd want to feed the YUV data directly to a JPEG encoder that can accept it, and avoid the extra conversion steps entirely. Unfortunately, this isn't possible through the public APIs, so you'd have to drop down to JNI code and include your own copy of libjpeg or an equivalent JPEG encoding library.

If you don't need to save JPEG files terribly quickly, you can swizzle the YUV_420_888 data into an NV21 byte[] and then use YuvImage, though you need to pay attention to the pixel and row strides of your YUV_420_888 data and map them correctly to NV21. YUV_420_888 is flexible and can represent several different kinds of memory layouts (including NV21's), and the layout may differ between devices, so when converting to NV21 it's critical to make sure you are doing the mapping correctly; a stride-aware sketch follows.
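
For reference, a minimal stride-aware sketch of that swizzle might look like this; it walks the chroma planes sample by sample, so it handles both planar and semi-planar layouts at the cost of speed (the helper name and structure are mine, not from this answer):

// Sketch: stride-aware YUV_420_888 -> NV21 (slow but layout-agnostic).
private static byte[] yuv420888ToNv21(Image image) {
    int w = image.getWidth(), h = image.getHeight();
    byte[] nv21 = new byte[w * h * 3 / 2];

    // Luma: copy row by row, honoring the row stride.
    Image.Plane yPlane = image.getPlanes()[0];
    ByteBuffer yBuf = yPlane.getBuffer();
    for (int row = 0; row < h; row++) {
        yBuf.position(row * yPlane.getRowStride());
        yBuf.get(nv21, row * w, w);
    }

    // Chroma: NV21 expects interleaved V,U at quarter resolution; read each
    // sample through the plane's pixel and row strides.
    Image.Plane uPlane = image.getPlanes()[1];
    Image.Plane vPlane = image.getPlanes()[2];
    ByteBuffer uBuf = uPlane.getBuffer();
    ByteBuffer vBuf = vPlane.getBuffer();
    int pos = w * h;
    for (int row = 0; row < h / 2; row++) {
        for (int col = 0; col < w / 2; col++) {
            nv21[pos++] = vBuf.get(row * vPlane.getRowStride() + col * vPlane.getPixelStride());
            nv21[pos++] = uBuf.get(row * uPlane.getRowStride() + col * uPlane.getPixelStride());
        }
    }
    return nv21;
}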

Eddy Talvala
  • Worse: while YUV_420_888 is declared as one of the formats for RS, as of today RenderScript [does not support this format](https://android.googlesource.com/platform/frameworks/rs/+/4bcc2cf8f68f80bed9815549f73af4720f9322d2/cpp/Type.cpp#218). – Alex Cohn Nov 27 '17 at 13:53
  • That's not true at the Java level, at least in a quick test. The Google demo https://github.com/googlesamples/android-HdrViewfinder, which uses YUV_420_888 (https://github.com/googlesamples/android-HdrViewfinder/blob/master/Application/src/main/java/com/example/android/hdrviewfinder/ViewfinderProcessor.java#L54), works fine for me on a Google Pixel 2 running Android 8.1. – Eddy Talvala Nov 28 '17 at 22:10
  • Looks like for `USAGE_IO_INPUT` this could work. Furthermore, I won't be surprised if under the hood this format is ignored, and HAL provides the actual NV21 or YV12 instead, see e.g. [rsallocation.c](https://android.googlesource.com/platform/frameworks/compile/libbcc/+/4293770c4700c898f252fbba14aa5f1c33380b41/lib/Renderscript/runtime/rs_allocation.c#287) – Alex Cohn Nov 29 '17 at 11:58

Do you have to use RenderScript? If not, you can transform the image from YUV to NV21 and then from NV21 to JPEG without any fancy structures. First, take planes 0 and 2 to get NV21:

private byte[] convertYUV420ToN21(Image imgYUV420) {
    // Concatenate plane 0 (Y) and plane 2 (V) into a single array.
    // Note: this assumes the device lays the planes out NV21-style,
    // with interleaved chroma and no row padding (see the comments below).
    ByteBuffer buffer0 = imgYUV420.getPlanes()[0].getBuffer();
    ByteBuffer buffer2 = imgYUV420.getPlanes()[2].getBuffer();
    int buffer0_size = buffer0.remaining();
    int buffer2_size = buffer2.remaining();
    byte[] rez = new byte[buffer0_size + buffer2_size];

    buffer0.get(rez, 0, buffer0_size);
    buffer2.get(rez, buffer0_size, buffer2_size);

    return rez;
}

Then you can use YuvImage's built-in method to compress to JPEG. The w and h arguments are the width and height of your image.

private byte[] convertN21ToJpeg(byte[] bytesN21, int w, int h) {
    // YuvImage knows how to compress NV21 data directly to JPEG.
    YuvImage yuv_image = new YuvImage(bytesN21, ImageFormat.NV21, w, h, null);
    Rect rect = new Rect(0, 0, w, h);
    ByteArrayOutputStream output_stream = new ByteArrayOutputStream();
    yuv_image.compressToJpeg(rect, 100, output_stream);
    return output_stream.toByteArray();
}
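
For context, a hypothetical call site (inside an ImageReader.OnImageAvailableListener; the reader variable name is assumed) would be:

// Hypothetical usage; 'reader' is an ImageReader delivering YUV_420_888 images.
Image image = reader.acquireNextImage();
byte[] nv21 = convertYUV420ToN21(image);
byte[] jpegBytes = convertN21ToJpeg(nv21, image.getWidth(), image.getHeight());
image.close();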
panonski
  • I don't need to use RenderScript. I tried your code, but the image is not correct. You can see here the result: https://drive.google.com/file/d/0B1yCC7QDeEjdcjlCRGRPaVR1RVk/view?usp=sharing Thank you very much anyway! – Yamidragut May 04 '15 at 14:05
  • There is a part of the image that now looks ok, don't you think? It seems to me like you need to examine how you pass the arguments to these methods (specifically the width and the height). Experiment with them to see if the outcome is different. – panonski May 04 '15 at 14:12
  • A YUV_420_888 buffer isn't the same as NV21, so you can't just concatenate the 3 planes together and treat it as NV21. You need to look at the row and pixel strides of the U/V buffers and potentially read out pixel by pixel; NV21 is semi-planar (interleaved V/U planes) while YUV_420_888 may be either semi-planar or fully planar. You also need to pay attention to row stride - the input to YuvImage is assumed to have no row stride, while the Y, U and V planes of Image may have a row stride > width. – Eddy Talvala Jul 08 '15 at 20:01

I managed to get this to work. The answer provided by panonski is not quite right; the big issue is that the YUV_420_888 format covers many different memory layouts, while the NV21 format is very specific. (I don't know why the default format was changed in this way; it makes no sense to me.)

Note that this method can be pretty slow for a few reasons.

  1. Because NV21 interleaves the chroma channels, and YUV_420_888 includes formats with non-interleaved chroma channels, the only reliable option (that I know of) is a byte-by-byte copy. I'd be interested to know if there is a trick to speed this process up; I suspect there is one. I provide a grayscale-only option because that part is a very fast row-by-row copy.

  2. When grabbing frames from the camera, the plane buffers come back as direct ByteBuffers with no accessible backing array, so their bytes must be copied out before they can be manipulated directly.

  3. The image appears to be stored in reverse byte order, so after conversion the final array needs to be reversed. This might just be my camera, and I suspect there is another trick to be found here that could speed this up a lot.

Anyway, here is the code:

private byte[] getRawCopy(ByteBuffer in) {
    // The Image planes are direct buffers with no backing array, so copy
    // them into a plain byte[] we can index.
    ByteBuffer rawCopy = ByteBuffer.allocate(in.capacity());
    rawCopy.put(in);
    return rawCopy.array();
}

private void fastReverse(byte[] array, int offset, int length) {
    // In-place reversal of array[offset .. offset+length) using XOR swaps.
    int end = offset + length;
    for (int i = offset; i < offset + (length / 2); i++) {
        array[i] = (byte)(array[i] ^ array[end - i - 1]);
        array[end - i - 1] = (byte)(array[i] ^ array[end - i - 1]);
        array[i] = (byte)(array[i] ^ array[end - i - 1]);
    }
}

private ByteBuffer convertYUV420ToN21(Image imgYUV420, boolean grayscale) {

    Image.Plane yPlane = imgYUV420.getPlanes()[0];
    byte[] yData = getRawCopy(yPlane.getBuffer());

    Image.Plane uPlane = imgYUV420.getPlanes()[1];
    byte[] uData = getRawCopy(uPlane.getBuffer());

    Image.Plane vPlane = imgYUV420.getPlanes()[2];
    byte[] vData = getRawCopy(vPlane.getBuffer());

    // NV21 stores a full-resolution luma (Y) plane plus quarter-resolution
    // U and V planes, so the total size is
    // size(y) + size(y) / 4 + size(y) / 4 = size(y) * 3 / 2
    int npix = imgYUV420.getWidth() * imgYUV420.getHeight();
    byte[] nv21Image = new byte[npix * 3 / 2];
    Arrays.fill(nv21Image, (byte)128); // 128 is neutral chroma (luma will be overwritten in either case)

    // Copy the Y-plane
    ByteBuffer nv21Buffer = ByteBuffer.wrap(nv21Image);
    for(int i = 0; i < imgYUV420.getHeight(); i++) {
        nv21Buffer.put(yData, i * yPlane.getRowStride(), imgYUV420.getWidth());
    }

    // Copy the u and v planes interlaced
    if(!grayscale) {
        for (int row = 0; row < imgYUV420.getHeight() / 2; row++) {
            for (int cnt = 0, upix = 0, vpix = 0; cnt < imgYUV420.getWidth() / 2; upix += uPlane.getPixelStride(), vpix += vPlane.getPixelStride(), cnt++) {
                nv21Buffer.put(uData[row * uPlane.getRowStride() + upix]);
                nv21Buffer.put(vData[row * vPlane.getRowStride() + vpix]);
            }
        }

        fastReverse(nv21Image, npix, npix / 2);
    }

    fastReverse(nv21Image, 0, npix);

    return nv21Buffer;
}
Max Ehrlich
  • Unless you are unsatisfied with your own answer, please accept it. Regarding the reverse order of pixels, your camera may simply have a non-standard orientation, like the [Nexus 5X](https://www.theverge.com/2015/11/9/9696774/google-nexus-5x-upside-down-camera). And yes, your conversion to NV21 could be improved for such an upside-down camera. – Alex Cohn Oct 01 '17 at 10:11
  • @AlexCohn I'd love to accept my answer but since I didn't ask the question, I can't – Max Ehrlich Oct 02 '17 at 16:27
  • Sorry, my mistake. This is what happens when too many Chrome tabs are open in parallel. – Alex Cohn Oct 02 '17 at 19:54
  • Unfortunately this method didn't work for me. @panonski's method worked for me, except on some edge devices where I get some green tears. – Sameer J Mar 16 '23 at 01:41

If I understood your description correctly, I can see at least two problems in your code:

  1. It seems you are only passing the Y part of your image to the YUV->RGB conversion code: in ByteBuffer buffer = mImage.getPlanes()[0].getBuffer(); you are only using the first plane, ignoring the U and V planes.

  2. I'm not familiar with these RenderScript types yet, but it looks like Element.RGBA_8888 and Bitmap.Config.ARGB_8888 refer to slightly different byte orderings, so you might need to do some reordering work.

Either of these problems could be the cause of the green color in the resulting picture.

silvaren
  • I think it is because of both things. I tried reordering the alpha bytes, but I get only red, pink, or yellow images instead of green ones. And I suspect the other reason is that I am using only the first plane, but I do not know how to put all the info from the planes into one well-formed byte array (whether I have to put the bytes from one plane right after the previous one, in which order, etc.). – Yamidragut May 04 '15 at 14:13

This is an answer/question. In several similar posts it's recommended to use this script: https://github.com/pinguo-yuyidong/Camera2/blob/master/camera2/src/main/rs/yuv2rgb.rs

But I don't know how to use it. Advice is welcome.
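
For what it's worth, the generic pattern for invoking a custom .rs kernel looks roughly like the sketch below. Placing yuv2rgb.rs under src/main/rs/ makes the build generate a ScriptC_yuv2rgb class, but the kernel and global names used here are guesses and may not match that particular script:

// Sketch only: kernel and global names are assumptions, not taken from the script.
RenderScript rs = RenderScript.create(context);
ScriptC_yuv2rgb script = new ScriptC_yuv2rgb(rs);

Type.Builder rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs))
        .setX(width).setY(height);
Allocation aOut = Allocation.createTyped(rs, rgbaType.create());

// Bind whatever globals the script declares (hypothetical names), then run
// its kernel over the output allocation.
script.set_gWidth(width);
script.set_gHeight(height);
script.forEach_yuvToRgb(aOut); // kernel name is a guess

Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
aOut.copyTo(bmp);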

Widea

Did the above conversion work? I tried it using RenderScript, copying the first and last planes, and I still received a green-filtered image like the one above.

Bogdan Chende
  • No, I have put a link to the image I obtained in a comment. It is not entirely green, but the colors weren't right. – Yamidragut May 04 '15 at 14:07