
I need to pass the FFmpeg 'raw' data back to my Java code in order to display it on the screen. I have a native method that deals with FFmpeg and afterwards calls a Java method that takes a Byte[] (so far) as an argument.

The byte array that is passed is read by Java, but calling BitmapFactory.decodeByteArray(bitmap, 0, bitmap.length); returns null. I have printed out the array and it has around 200k elements (which is expected), but it cannot be decoded. So far what I'm doing is taking the data from AVFrame->data, casting it to unsigned char *, and then casting that to jbyteArray. After all the casting, I pass the jbyteArray as an argument to my Java method. Is there something I'm missing here? Why won't BitmapFactory decode the array into an image for display?

EDIT 1.0

Currently I am trying to obtain my image via

public void setImage(ByteBuffer bmp) {
        bmp.rewind();
        Bitmap bitmap = Bitmap.createBitmap(1920, 1080, Bitmap.Config.ARGB_8888);
        bitmap.copyPixelsFromBuffer(bmp);

        runOnUiThread(() -> {
            ImageView imgViewer = findViewById(R.id.mSurfaceView);
            imgViewer.setImageBitmap(bitmap);

        });
    }

But I keep getting an exception:

JNI DETECTED ERROR IN APPLICATION: JNI NewDirectByteBuffer called with pending exception java.lang.RuntimeException: Buffer not large enough for pixels
at void android.graphics.Bitmap.copyPixelsFromBuffer(java.nio.Buffer) (Bitmap.java:657)
at void com.example.asmcpp.MainActivity.setSurfaceImage(java.nio.ByteBuffer) 

EDIT 1.1

So, here is the full code that executes every time a frame comes in. Note that the ByteBuffer is created and passed from within this method:


void VideoClientInterface::onEncodedFrame(video::encoded_frame_t &encodedFrame) {
    AVFrame *filt_frame = av_frame_alloc();
    auto frame = std::shared_ptr<video::encoded_frame_t>(new video::encoded_frame_t,
                                                         [](video::encoded_frame_t *p) { if (p) delete p; });
    if (frame) {
        frame->size = encodedFrame.size;
        frame->ssrc = encodedFrame.ssrc;
        frame->width = encodedFrame.width;
        frame->height = encodedFrame.height;
        frame->dataType = encodedFrame.dataType;
        frame->timestamp = encodedFrame.timestamp;
        frame->frameIndex = encodedFrame.frameIndex;
        frame->isKeyFrame = encodedFrame.isKeyFrame;
        frame->isDroppable = encodedFrame.isDroppable;

        frame->data = new char[frame->size];
        if (frame->data) {
            memcpy(frame->data, encodedFrame.data, frame->size);
            AVPacket packet;
            av_init_packet(&packet);

            packet.dts = AV_NOPTS_VALUE;
            packet.pts = encodedFrame.timestamp;

            packet.data = (uint8_t *) encodedFrame.data;
            packet.size = encodedFrame.size;

            int ret = avcodec_send_packet(m_avCodecContext, &packet);
            if (ret == 0) {
                ret = avcodec_receive_frame(m_avCodecContext, m_avFrame);
                if (ret == 0) {
                    m_transform = sws_getCachedContext(
                            m_transform, // previous context ptr
                            m_avFrame->width, m_avFrame->height, AV_PIX_FMT_YUV420P, // src
                            m_avFrame->width, m_avFrame->height, AV_PIX_FMT_RGB24, // dst
                            SWS_BILINEAR, nullptr, nullptr, nullptr // options
                    );

                    auto decodedFrame = std::make_shared<video::decoded_frame_t>();
                    decodedFrame->width = m_avFrame->width;
                    decodedFrame->height = m_avFrame->height;
                    decodedFrame->size = m_avFrame->width * m_avFrame->height * 3;
                    decodedFrame->timeStamp = m_avFrame->pts;

                    decodedFrame->data = new unsigned char[decodedFrame->size];

                    if (decodedFrame->data) {
                        uint8_t *dstSlice[] = {decodedFrame->data,
                                               0,
                                               0};// outFrame.bits(), outFrame.bits(), outFrame.bits()

                        const int dstStride[] = {decodedFrame->width * 3, 0, 0};
                        sws_scale(m_transform, m_avFrame->data, m_avFrame->linesize,
                                                  0, m_avFrame->height, dstSlice, dstStride);

                        auto m_rawData = decodedFrame->data;
                        auto len = strlen(reinterpret_cast<char *>(m_rawData));
                        if (frameCounter == 10) {
                            jobject newArray = GetJniEnv()->NewDirectByteBuffer(m_rawData, len);
                            GetJniEnv()->CallVoidMethod(m_obj, setSurfaceImage, newArray);
                            frameCounter = 0;

                        }
                        frameCounter++;

                    }
                } else {
                    av_packet_unref(&packet);
                }
            } else {
                av_packet_unref(&packet);
            }
        }
    }
}

I am not entirely sure I am even doing that part correctly. If you see any errors in this, feel free to point them out.

  • I wrote an answer on the assumption that you meant a primitive array (`byte[]`) instead of an array of objects (`Byte[]`). – Botje Nov 18 '19 at 12:19
  • @Botje that is exactly what I need. I am trying to incorporate your answer into my code. Just dealing with ByteBuffer ```UnsupportedOperationException``` exceptions right now. – Falcuun Nov 18 '19 at 12:30
  • @Botje edited the question with some more details, based on your comment on your answer. – Falcuun Nov 18 '19 at 13:47
  • How big is your `ByteBuffer`? It should be 4*1920*1080 bytes. – Botje Nov 18 '19 at 13:49
  • @Botje No damn idea. I've added the C++ code that calls that Java method from the first edit. The entire function. – Falcuun Nov 18 '19 at 13:56
  • Using `strlen` on arbitrary binary data does not make much sense. From your code it is clear you receive an RGB(24) bitmap, while Android wants an ARGB image. You will need to ask FFmpeg to produce ARGB (or RGB8888), then tell Java you have a buffer of size `m_avFrame->width * m_avFrame->height * 4` (which should go in `decodedFrame->size`). Finally, make sure that `decodedFrame->data` remains *live* for the entire duration that Java is using it, otherwise your program will still crash. You could store `decodedFrame` somewhere in your object, for example. – Botje Nov 18 '19 at 14:07
  • @Botje Okay, so we had to do some code clean-up and a little structure fix to implement RGBA (Alpha being the last one for some reason). The problem is now solved and the stream is working just fine. Also dealt with some memory leaks in the code that were causing an OOM exception to be thrown. Thank you so much for pointing out the format. (Ain't using sizeof() anymore, that was a vague attempt at something that just stayed there...) – Falcuun Nov 18 '19 at 15:20
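
Based on those comments, the inner conversion block ends up looking roughly like the sketch below. It reuses m_transform, m_avFrame, GetJniEnv(), m_obj and setSurfaceImage from the code above; m_lastDecodedFrame is a made-up member whose only purpose is to keep the pixel data alive while Java reads it.

// Sketch only: convert to RGBA, size the buffer as width * height * 4, and keep
// the decoded frame alive by storing it in the object instead of letting it die
// at the end of the function.
m_transform = sws_getCachedContext(
        m_transform,
        m_avFrame->width, m_avFrame->height, AV_PIX_FMT_YUV420P, // src
        m_avFrame->width, m_avFrame->height, AV_PIX_FMT_RGBA,    // dst: 4 bytes per pixel, alpha last
        SWS_BILINEAR, nullptr, nullptr, nullptr);

auto decodedFrame = std::make_shared<video::decoded_frame_t>();
decodedFrame->width = m_avFrame->width;
decodedFrame->height = m_avFrame->height;
decodedFrame->size = m_avFrame->width * m_avFrame->height * 4;    // real byte count, not strlen()
decodedFrame->data = new unsigned char[decodedFrame->size];

uint8_t *dstSlice[] = { decodedFrame->data, nullptr, nullptr };
const int dstStride[] = { decodedFrame->width * 4, 0, 0 };
sws_scale(m_transform, m_avFrame->data, m_avFrame->linesize,
          0, m_avFrame->height, dstSlice, dstStride);

m_lastDecodedFrame = decodedFrame;                                // keeps the pixels live while Java uses them

jobject buffer = GetJniEnv()->NewDirectByteBuffer(decodedFrame->data, decodedFrame->size);
GetJniEnv()->CallVoidMethod(m_obj, setSurfaceImage, buffer);
GetJniEnv()->DeleteLocalRef(buffer);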

1 Answer


You cannot cast native byte arrays to jbyteArray and expect it to work. A byte[] is an actual Java object with a length field, a reference count, and so on.
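
For comparison, creating a real byte[] from native memory has to go through JNI, which allocates the Java object and copies the bytes into it; roughly like this (the helper name is made up for illustration):

jbyteArray toJavaByteArray(JNIEnv *env, const uint8_t *data, jsize size) {
    // NewByteArray allocates an actual Java byte[] object on the Java heap
    jbyteArray array = env->NewByteArray(size);
    // SetByteArrayRegion copies the native bytes into that object
    env->SetByteArrayRegion(array, 0, size, reinterpret_cast<const jbyte *>(data));
    return array;
}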

Use NewDirectByteBuffer instead to wrap your byte buffer into a Java ByteBuffer, from which you can grab the actual byte[] using .array().

Note that this JNI operation is relatively expensive, so if you expect to do this on a per-frame basis, you might want to pre-allocate some ByteBuffers and tell FFmpeg to write directly into those buffers.
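
A rough sketch of that idea, assuming the frame dimensions are known up front: initFrameBuffer, publishFrame, m_pixelBuf, m_pixelBufSize and m_pixelBufRef are made-up names, while m_avFrame, m_transform, GetJniEnv(), m_obj and setSurfaceImage come from the code in the question.

void VideoClientInterface::initFrameBuffer(int width, int height) {
    m_pixelBufSize = width * height * 4;                      // RGBA output
    m_pixelBuf = new unsigned char[m_pixelBufSize];
    jobject local = GetJniEnv()->NewDirectByteBuffer(m_pixelBuf, m_pixelBufSize);
    m_pixelBufRef = GetJniEnv()->NewGlobalRef(local);         // keep the ByteBuffer usable across JNI calls
    GetJniEnv()->DeleteLocalRef(local);
}

// Per decoded frame: convert straight into the pre-allocated buffer and notify Java,
// which already holds a reference to the same memory. Assumes m_transform was created
// with AV_PIX_FMT_RGBA as the destination format.
void VideoClientInterface::publishFrame() {
    uint8_t *dstSlice[] = { m_pixelBuf, nullptr, nullptr };
    const int dstStride[] = { m_avFrame->width * 4, 0, 0 };
    sws_scale(m_transform, m_avFrame->data, m_avFrame->linesize,
              0, m_avFrame->height, dstSlice, dstStride);
    GetJniEnv()->CallVoidMethod(m_obj, setSurfaceImage, m_pixelBufRef);
}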

  • So, I'm obtaining the ```ByteBuffer``` object, but calling ```.array()``` on it throws ```UnsupportedOperationException``` at me, and crashes everything. – Falcuun Nov 18 '19 at 12:44
  • Doh. It may be that `.array()` only works for `ByteBuffer`s that wrap a `byte[]`. Plan B: Use [`Bitmap#copyPixelsFromBuffer`](https://developer.android.com/reference/android/graphics/Bitmap.html#copyPixelsFromBuffer(java.nio.Buffer)) (found in [this answer](https://stackoverflow.com/a/39751247/1548468)) – Botje Nov 18 '19 at 12:50