
Update #6 Discovered I was accessing the RGB values improperly. I assumed I was reading packed pixel data from an int[], but was actually reading individual bytes from a byte[]. After switching to int-based access I get the following image:

[output image]
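
The int-based access looks roughly like this (a sketch of the idea, not my exact code): it views the mapBuffer from the screenScrape() code in Update #5 as packed ints, and assumes a little-endian device, so each int holds the GL_RGBA bytes as 0xAABBGGRR.

    //view the mapped RGBA bytes as packed ints (little-endian: 0xAABBGGRR)
    IntBuffer rgbaInts = mapBuffer.asIntBuffer();
    int[] pixels = new int[mWidth * mHeight];
    rgbaInts.get(pixels);

    //extract the channels of one pixel (index = per-pixel index in the conversion loop)
    int pixel = pixels[index];
    int R =  pixel        & 0xff;
    int G = (pixel >> 8)  & 0xff;
    int B = (pixel >> 16) & 0xff;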


Update #5 Adding the code used to get the RGBA ByteBuffer, for reference:

 private void screenScrape() {

    Log.d(TAG, "In screenScrape");

    //read pixels from frame buffer into PBO (GL_PIXEL_PACK_BUFFER)
    mSurface.queueEvent(new Runnable() {
        @Override
        public void run() {
            Log.d(TAG, "In Screen Scrape 1");
            //generate and bind buffer ID
            GLES30.glGenBuffers(1, pboIds);
            checkGlError("Gen Buffers");
            GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pboIds.get(0));
            checkGlError("Bind Buffers");

            //creates and initializes data store for PBO.  Any pre-existing data store is deleted
            GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, (mWidth * mHeight * 4), null, GLES30.GL_STATIC_READ);
            checkGlError("Buffer Data");

            //glReadPixelsPBO(0,0,w,h,GLES30.GL_RGB,GLES30.GL_UNSIGNED_SHORT_5_6_5,0);
            glReadPixelsPBO(0, 0, mWidth, mHeight, GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0);

            checkGlError("Read Pixels");
            //GLES30.glReadPixels(0,0,w,h,GLES30.GL_RGBA,GLES30.GL_UNSIGNED_BYTE,intBuffer);
        }
    });

    //map PBO data into client address space
    mSurface.queueEvent(new Runnable() {
        @Override
        public void run() {
            Log.d(TAG, "In Screen Scrape 2");

            //read pixels from PBO into a byte buffer for processing.  Unmap buffer for use in next pass
            mapBuffer = ((ByteBuffer) GLES30.glMapBufferRange(GLES30.GL_PIXEL_PACK_BUFFER, 0, 4 * mWidth * mHeight, GLES30.GL_MAP_READ_BIT)).order(ByteOrder.nativeOrder());
            checkGlError("Map Buffer");

            GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
            checkGlError("Unmap Buffer");

            isByteBufferEmpty(mapBuffer, "MAP BUFFER");
            convertColorSpaceByteArray(mapBuffer);
            mapBuffer.clear();
        }
    });
}

Update #4 For reference, here is the original image to compare against:

[original image]


Update #3 This is the output image after interleaving all of the U/V data into a single array and passing it to the Image object at inputImagePlanes[1]; inputImagePlanes[2] is unused:

[output image]

The next image is the same interleaved U/V data, but loaded into inputImagePlanes[2] instead of inputImagePlanes[1]:

[output image]


Update #2 This is the output image after padding the U/V buffers with a zero byte in between each byte of 'real' data, i.e. uArray[uvByteIndex] = (byte) 0;

[output image]
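
The padding experiment looked roughly like this (a sketch, using the variable names from convertColorSpaceByteArray in Step 3; the exact spacer indexing is an assumption):

    //chroma arrays twice as large, with a zero "spacer" byte after each real sample
    byte[] uArray = new byte[(frameSize / 4) * 2];
    byte[] vArray = new byte[(frameSize / 4) * 2];

    //inside the chroma branch of the conversion loop:
    uArray[2 * uvByteIndex]     = (byte) ((U < 16) ? 16 : ((U > 239) ? 239 : U));
    uArray[2 * uvByteIndex + 1] = (byte) 0;   //spacer
    vArray[2 * uvByteIndex]     = (byte) ((V < 16) ? 16 : ((V > 239) ? 239 : V));
    vArray[2 * uvByteIndex + 1] = (byte) 0;   //spacer
    uvByteIndex++;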


Update #1 As suggested in a comment, here are the row and pixel strides I get from calling getPixelStride() and getRowStride():

Y Plane Pixel Stride = 1, Row Stride = 960
U Plane Pixel Stride = 2, Row Stride = 960
V Plane Pixel Stride = 2, Row Stride = 960
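
These values come from the standard Image.Plane accessors; the query is roughly this, using the inputImage from encodeVideoFromImage in Step 4:

    //dump the layout of each plane of the codec's input Image
    Image.Plane[] planes = inputImage.getPlanes();
    for (int i = 0; i < planes.length; i++) {
        Log.d(TAG, "Plane " + i
                + " Pixel Stride = " + planes[i].getPixelStride()
                + ", Row Stride = " + planes[i].getRowStride());
    }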

The goal of my application is to read pixels out from the screen, compress them, and then send that H.264 stream over WiFi to be played by a receiver.

Currently I'm using the MediaMuxer class to convert the raw H.264 stream to an MP4 and save it to a file. However, the resulting video is messed up and I can't figure out why. Let's walk through some of the processing and see if anything jumps out.

Step 1 Set up the encoder. I'm currently capturing a screen image once every 2 seconds, and using "video/avc" for MIME_TYPE.

        //create codec for compression
        try {
            mCodec = MediaCodec.createEncoderByType(MIME_TYPE);
        } catch (IOException e) {
            Log.d(TAG, "FAILED: Initializing Media Codec");
        }

        //set up format for codec
        MediaFormat mFormat = MediaFormat.createVideoFormat(MIME_TYPE, mWidth, mHeight);

        mFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible);
        mFormat.setInteger(MediaFormat.KEY_BIT_RATE, 16000000);
        mFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 1/2);
        mFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);

Step 2 Read pixels out from the screen. This is done using OpenGL ES, and the pixels are read out in RGBA format. (I've confirmed this part to be working.)
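
The full PBO-based read is shown in Update #5 above; a direct (non-PBO) read would look roughly like this:

    //direct read of RGBA pixels into a client-side buffer (non-PBO variant)
    ByteBuffer rgbaBuffer = ByteBuffer.allocateDirect(mWidth * mHeight * 4)
            .order(ByteOrder.nativeOrder());
    GLES30.glReadPixels(0, 0, mWidth, mHeight,
            GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, rgbaBuffer);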

Step 3 Convert the RGBA pixels to YUV420 (IYUV) format. This is done using the following method. Note that two encoding methods are called at the end of this method.

 private void convertColorSpaceByteArray(ByteBuffer rgbBuffer) {

    long startTime = System.currentTimeMillis();

    Log.d(TAG, "In convertColorspace");
    final int frameSize = mWidth * mHeight;
    final int chromaSize = frameSize / 4;

    byte[] rgbByteArray = new byte[rgbBuffer.remaining()];
    rgbBuffer.get(rgbByteArray);

    byte[] yuvByteArray = new byte[inputBufferSize];
    Log.d(TAG, "Input Buffer size = " + inputBufferSize);

    byte[] yArray = new byte[frameSize];
    byte[] uArray = new byte[(frameSize / 4)];
    byte[] vArray = new byte[(frameSize / 4)];

    isByteBufferEmpty(rgbBuffer, "RGB BUFFER");

    int yIndex = 0;
    int uIndex = frameSize;
    int vIndex = frameSize + chromaSize;

    int yByteIndex = 0;
    int uvByteIndex = 0;

    int R, G, B, Y, U, V;
    int index = 0;

    //this loop controls the rows
    for (int i = 0; i < mHeight; i++) {
        //this loop controls the columns
        for (int j = 0; j < mWidth; j++) {

            R = (rgbByteArray[index] & 0xff0000) >> 16;
            G = (rgbByteArray[index] & 0xff00) >> 8;
            B = (rgbByteArray[index] & 0xff);

            Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
            U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
            V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;

            //clamp and load in the Y data
            yuvByteArray[yIndex++] = (byte) ((Y < 16) ? 16 : ((Y > 235) ? 235 : Y));
            yArray[yByteIndex] = (byte) ((Y < 16) ? 16 : ((Y > 235) ? 235 : Y));
            yByteIndex++;

            if (i % 2 == 0 && index % 2 == 0) {
                //clamp and load in the U & V data
                yuvByteArray[uIndex++] = (byte) ((U < 16) ? 16 : ((U > 239) ? 239 : U));
                yuvByteArray[vIndex++] = (byte) ((V < 16) ? 16 : ((V > 239) ? 239 : V));

                uArray[uvByteIndex] = (byte) ((U < 16) ? 16 : ((U > 239) ? 239 : U));
                vArray[uvByteIndex] = (byte) ((V < 16) ? 16 : ((V > 239) ? 239 : V));

                uvByteIndex++;
            }
            index++;
        }
    }
    encodeVideoFromImage(yArray, uArray, vArray);
    encodeVideoFromBuffer(yuvByteArray);
}

Step 4 Encode the data! I currently have two different ways of doing this, and each produces a different output. One uses a ByteBuffer returned from MediaCodec.getInputBuffer(); the other uses an Image returned from MediaCodec.getInputImage().

Encoding using ByteBuffer

 private void encodeVideoFromBuffer(byte[] yuvData) {

    Log.d(TAG, "In encodeVideo");
    int inputSize = 0;

    //create index for input buffer
    inputBufferIndex = mCodec.dequeueInputBuffer(0);
    //create the input buffer for submission to encoder
    ByteBuffer inputBuffer = mCodec.getInputBuffer(inputBufferIndex);


    //clear, then copy yuv buffer into the input buffer
    inputBuffer.clear();
    inputBuffer.put(yuvData);

    //flip buffer before reading data out of it
    inputBuffer.flip();

    mCodec.queueInputBuffer(inputBufferIndex, 0, inputBuffer.remaining(), presentationTime, 0);

    presentationTime += MICROSECONDS_BETWEEN_FRAMES;

    sendToWifi();
}

And the associated output image (note: I took a screenshot of the MP4):

[output image]

Encoding using Image

 private void encodeVideoFromImage(byte[] yToEncode, byte[] uToEncode, byte[] vToEncode) {

    Log.d(TAG, "In encodeVideo");
    int inputSize = 0;

    //create index for input buffer
    inputBufferIndex = mCodec.dequeueInputBuffer(0);
    //create the input buffer for submission to encoder
    Image inputImage = mCodec.getInputImage(inputBufferIndex);
    Image.Plane[] inputImagePlanes = inputImage.getPlanes();

    ByteBuffer yPlaneBuffer = inputImagePlanes[0].getBuffer();
    ByteBuffer uPlaneBuffer = inputImagePlanes[1].getBuffer();
    ByteBuffer vPlaneBuffer = inputImagePlanes[2].getBuffer();

    yPlaneBuffer.put(yToEncode);
    uPlaneBuffer.put(uToEncode);
    vPlaneBuffer.put(vToEncode);

    yPlaneBuffer.flip();
    uPlaneBuffer.flip();
    vPlaneBuffer.flip();

    mCodec.queueInputBuffer(inputBufferIndex, 0, inputBufferSize, presentationTime, 0);

    presentationTime += MICROSECONDS_BETWEEN_FRAMES;

    sendToWifi();
}

And the associated output image (note: I took a screenshot of the MP4):

[output image]

Step 5 Convert the H.264 stream to MP4. Finally, I grab the output buffer from the codec and use MediaMuxer to convert the raw H.264 stream to an MP4 that I can play and test for correctness.

 private void sendToWifi() {
    Log.d(TAG, "In sendToWifi");

    MediaCodec.BufferInfo mBufferInfo = new MediaCodec.BufferInfo();

    //Check to see if encoder has output before proceeding
    boolean waitingForOutput = true;
    boolean outputHasChanged = false;
    int outputBufferIndex = 0;

    while (waitingForOutput) {
        //access the output buffer from the codec
        outputBufferIndex = mCodec.dequeueOutputBuffer(mBufferInfo, -1);

        if (outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
            outputFormat = mCodec.getOutputFormat();
            outputHasChanged = true;
            Log.d(TAG, "OUTPUT FORMAT HAS CHANGED");
        }

        if (outputBufferIndex >= 0) {
            waitingForOutput = false;
        }
    }

    //this buffer now contains the compressed YUV data, ready to be sent over WiFi
    ByteBuffer outputBuffer = mCodec.getOutputBuffer(outputBufferIndex);

    //adjust output buffer position and limit.  As of API 19, this is not automatic
    if(mBufferInfo.size != 0) {
        outputBuffer.position(mBufferInfo.offset);
        outputBuffer.limit(mBufferInfo.offset + mBufferInfo.size);
    }


    ////////////////////////////////FOR DEBUG/////////////////////////////
    if (muxerNotStarted && outputHasChanged) {
        //set up track
        mTrackIndex = mMuxer.addTrack(outputFormat);

        mMuxer.start();
        muxerNotStarted = false;
    }

    if (!muxerNotStarted) {
        mMuxer.writeSampleData(mTrackIndex, outputBuffer, mBufferInfo);
    }
    ////////////////////////////END DEBUG//////////////////////////////////

    //release the buffer
    mCodec.releaseOutputBuffer(outputBufferIndex, false);
    muxerPasses++;
}

If you've made it this far, you're a gentleman (or lady!) and a scholar! Basically, I'm stumped as to why my image is not coming out properly. I'm relatively new to video processing, so I'm sure I'm just missing something.

Neil Ruggiero
  • Definitely looks like a plane problem. Please dump the pixel/row stride for each `Image.Plane` and add to your question. Might as well stick with `encodeVideoFromImage()`, b/c you're using the wrong `COLOR_Format` for `encodeVideoFromBuffer()`. Also, your `KEY_FRAME_RATE` parameter looks like a zero (1/2)! – greeble31 Feb 05 '19 at 15:04
  • Good catch! I thought that the 1/2 would convert to a float and as such would match the 1 frame every 2 seconds. However you're saying that this will get converted to a 0 instead of remaining a float, correct? – Neil Ruggiero Feb 05 '19 at 15:18
  • Right. It's integer division; the cast to `float` isn't performed until after the division. And it won't cast to `float` if you're calling `setInteger()`! Use `1.0f/2` instead. You see the problem with your strides? – greeble31 Feb 05 '19 at 15:33
  • Thanks for the info! I don't see the issue with my strides however. I'm really new to all this video processing so pardon my lack of knowledge. Would you be able to shed any light on the subject? – Neil Ruggiero Feb 05 '19 at 15:36
  • The stride arguments imply that the U/V data is supposed to be interleaved, not separate planes. I'm trying to understand exactly what the system expects, but I can't find a good code sample online. What is the value of `uPlaneBuffer.getClass().toString()`? – greeble31 Feb 05 '19 at 15:51
  • Interesting that it expects interleaved UV data, but the returned image also has a separate plane for each of the U and V data. At any rate, the output from `uPlaneBuffer.getClass().toString()` is java.nio.DirectByteBuffer. This was true for each of the Y, U and V planes. – Neil Ruggiero Feb 05 '19 at 16:03
  • Yeah, they're doing fancy things with direct `ByteBuffers`. What I was afraid of. You're gonna have to do some experiments. Try this first: make `uArray` and `vArray` twice as big; leave an empty "spacer" byte in-between each byte, and let's see what kind of image that produces. – greeble31 Feb 05 '19 at 16:09
  • Just made your suggested change (Update #2) and got the output image posted in the original. We no longer see a horizontal split across the screen, hopefully a good sign! – Neil Ruggiero Feb 05 '19 at 16:29
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/187943/discussion-between-greeble31-and-neil-ruggiero). – greeble31 Feb 05 '19 at 16:35

1 Answer


If you're on API 19+, you might as well stick with encoding method #2, getInputImage()/encodeVideoFromImage(), since that is the more modern approach.

Focusing on that method: one problem was that you had an unexpected image format. With COLOR_FormatYUV420Flexible, you know you're going to have 8-bit U and V components, but you won't know in advance where they go. That's why you have to query the Image.Plane formats. It could be different on every device.
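
A quick way to see what layout you actually got is to check the chroma pixel stride; a stride of 2, as in your Update #1, is what points to interleaved U/V. A minimal check, using the inputImage from your encodeVideoFromImage():

    //a chroma pixel stride of 2 indicates interleaved (semi-planar) U/V
    Image.Plane uPlane = inputImage.getPlanes()[1];
    boolean chromaInterleaved = (uPlane.getPixelStride() == 2);
    Log.d(TAG, "U plane pixel stride = " + uPlane.getPixelStride()
            + ", interleaved = " + chromaInterleaved);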

In this case, the UV format turned out to be interleaved (very common on Android devices). If you're using Java, and you supply each array (U/V) separately, with the "stride" requested ("spacer" byte in-between each sample), I believe one array ends up clobbering the other, because these are actually "direct" ByteBuffers, and they were intended to be used from native code, like in this answer. The solution I explained was to copy an interleaved array into the third (V) plane, and ignore the U plane. On the native side, these two planes actually overlap each other in memory (except for the first and last byte), so filling one causes the implementation to fill both.
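
A rough sketch of that workaround, using the arrays and planes from your code (the U-before-V order here is an assumption and may need to be swapped, and the put is capped so it can't overflow the plane's buffer):

    //build one interleaved chroma array and write it into the V plane only;
    //because the U and V planes overlap in memory, this fills both
    byte[] uvInterleaved = new byte[uArray.length + vArray.length];
    for (int i = 0; i < uArray.length; i++) {
        uvInterleaved[2 * i]     = uArray[i];   //order is an assumption; swap if the colors come out wrong
        uvInterleaved[2 * i + 1] = vArray[i];
    }
    ByteBuffer vPlaneBuffer = inputImagePlanes[2].getBuffer();
    vPlaneBuffer.put(uvInterleaved, 0, Math.min(uvInterleaved.length, vPlaneBuffer.remaining()));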

If you use the second (U) plane instead, you'll find things work, but the colors look funny. That's also because of the overlapping arrangement of these two planes; what that does, effectively, is shift every array element by one byte (which puts U's where V's should be, and vice versa.)

...In other words, this solution is actually a bit of a hack. Probably the only way to do this correctly, and have it work on all devices, is to use native code (as in the answer I linked above).

Once the color plane problem is fixed, that leaves all the funny overlapping text and vertical striations. These were actually caused by your interpretation of the RGB data, which had the wrong stride.
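
In terms of your conversion loop, each pixel occupies 4 bytes in the RGBA buffer, so the index has to advance by 4 per pixel and by the row stride per row. A sketch, assuming tightly packed rows:

    //stride-aware indexing into the RGBA byte array (assumes rowStride == mWidth * 4)
    int rowStride = mWidth * 4;
    int offset = i * rowStride + j * 4;   //i = row, j = column
    int R = rgbByteArray[offset]     & 0xff;
    int G = rgbByteArray[offset + 1] & 0xff;
    int B = rgbByteArray[offset + 2] & 0xff;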

And, once that is fixed, you have a decent-looking picture. It's been mirrored vertically; I don't know the root cause of that, but I suspect it's an OpenGL issue.
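
If it is the usual bottom-left origin of glReadPixels, a simple row reversal of the RGBA buffer before the color conversion should undo it (a sketch, assuming tightly packed rows):

    //vertically flip a tightly packed RGBA byte array
    int rowBytes = mWidth * 4;
    byte[] flipped = new byte[rgbByteArray.length];
    for (int row = 0; row < mHeight; row++) {
        System.arraycopy(rgbByteArray, row * rowBytes,
                flipped, (mHeight - 1 - row) * rowBytes, rowBytes);
    }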

greeble31