
My issue is that, for a video call, I receive the frames in my C code as an I420 byte array. I convert that to NV21 and send the byte array up to Java to create a bitmap. But because I need to create a YuvImage from the byte array, and then a bitmap from that, I have conversion overhead that causes delays and loss of quality. I am wondering if there is another way to do this: could I create the bitmap directly in the C code, and maybe even draw it to an ImageView or a SurfaceView from the C code? Or simply pass the bitmap to my Java callback so I can set it there, without needing to create the bitmap on the Android side?

This is what I do with the byte array in the C code:

if(size == 0)
    return;

jboolean isAttached;
JNIEnv *env;
jint jParticipant;
jint jWidth;
jint jHeight;
jbyteArray jRawImageBytes;

env = getJniEnv(&isAttached);

if (env == NULL)
    goto FAIL0;
//LOGE(".... **** ....TRYING TO FIND CALLBACK");
LOGI("FrameReceived will reach here 1");
char *modifiedRawImageBytes = malloc(size);
if (modifiedRawImageBytes == NULL)
    goto FAIL0;
memcpy(modifiedRawImageBytes, rawImageBytes, size); // Y plane is already in place
jint sizeWH = width * height;
jint quarter = sizeWH / 4;
jint v0 = sizeWH + quarter;
// I420 stores the planes as Y..U..V; NV21 expects Y followed by interleaved V/U pairs.
for (int u = sizeWH, v = v0, o = sizeWH; u < v0; u++, v++, o += 2) {
    modifiedRawImageBytes[o] = rawImageBytes[v];     // For NV21, V first
    modifiedRawImageBytes[o + 1] = rawImageBytes[u]; // For NV21, U second
}

if(remote)
{
    if(frameReceivedRemoteMethod == NULL)
        frameReceivedRemoteMethod = getApplicationJniMethodId(env, applicationJniObj, "vidyoConferenceFrameReceivedRemoteCallback", "(III[B)V");

    if (frameReceivedRemoteMethod == NULL) {
        //LOGE(".... **** ....CALLBACK NOT FOUND");
        goto FAIL1;
    }
}
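For reference, the interleaving logic above can be pulled out into a standalone, testable helper. This is a sketch under my own naming (`i420_to_nv21` is not part of the SDK); it copies the Y plane unchanged and swizzles the separate U and V planes into interleaved V/U pairs:

```c
#include <stdint.h>
#include <string.h>

/* Convert an I420 frame (planar Y, then U, then V) into NV21
 * (planar Y, then interleaved V/U). Both buffers must hold
 * width * height * 3 / 2 bytes. */
static void i420_to_nv21(const uint8_t *src, uint8_t *dst, int width, int height) {
    int sizeWH = width * height;
    int quarter = sizeWH / 4;
    const uint8_t *u_plane = src + sizeWH;           /* I420: U plane follows Y */
    const uint8_t *v_plane = src + sizeWH + quarter; /* I420: V plane follows U */

    memcpy(dst, src, sizeWH);                        /* Y plane is unchanged */
    for (int i = 0; i < quarter; i++) {
        dst[sizeWH + 2 * i]     = v_plane[i];        /* NV21: V first */
        dst[sizeWH + 2 * i + 1] = u_plane[i];        /* NV21: U second */
    }
}
```

Isolating it this way also makes the conversion easy to unit-test off-device with a tiny synthetic frame.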

This is what I do in the Android Java code:

remoteResolution = width + "x" + height;
remoteBAOS = new ByteArrayOutputStream();
remoteYUV = new YuvImage(rawImageBytes, ImageFormat.NV21, width, height, null);
remoteYUV.compressToJpeg(new Rect(0, 0, width, height), 100, remoteBAOS);
remoteBA = remoteBAOS.toByteArray();
remoteBitmap = BitmapFactory.decodeByteArray(remoteBA, 0, remoteBA.length);
new Handler(Looper.getMainLooper()).post(new Runnable() {
    @Override
    public void run() {
        remoteView.setImageBitmap(remoteBitmap);
    }
});

This is how the SDK's sample app does it, but I feel this is not best practice at all, and there has to be a faster way to get the Bitmap from the byte array, preferably in the C code. Any ideas on how to improve this?

EDIT:

I modified my Java code. I now use this library: https://github.com/silvaren/easyrs

so my code is now:

remoteBitmap = Nv21Image.nv21ToBitmap(rs, rawImageBytes, width, height);
new Handler(Looper.getMainLooper()).post(new Runnable() {
    @Override
    public void run() {
        remoteView.setImageBitmap(remoteBitmap);
    }
});

where nv21ToBitmap ends up calling this yuvToRgb method:

public static Bitmap yuvToRgb(RenderScript rs, Nv21Image nv21Image) {
    long startTime = System.currentTimeMillis();

    Type.Builder yuvTypeBuilder = new Type.Builder(rs, Element.U8(rs))
            .setX(nv21Image.nv21ByteArray.length);
    Type yuvType = yuvTypeBuilder.create();
    Allocation yuvAllocation = Allocation.createTyped(rs, yuvType, Allocation.USAGE_SCRIPT);
    yuvAllocation.copyFrom(nv21Image.nv21ByteArray);

    Type.Builder rgbTypeBuilder = new Type.Builder(rs, Element.RGBA_8888(rs));
    rgbTypeBuilder.setX(nv21Image.width);
    rgbTypeBuilder.setY(nv21Image.height);
    Allocation rgbAllocation = Allocation.createTyped(rs, rgbTypeBuilder.create());

    ScriptIntrinsicYuvToRGB yuvToRgbScript = ScriptIntrinsicYuvToRGB.create(rs, Element.RGBA_8888(rs));
    yuvToRgbScript.setInput(yuvAllocation);
    yuvToRgbScript.forEach(rgbAllocation);

    Bitmap bitmap = Bitmap.createBitmap(nv21Image.width, nv21Image.height, Bitmap.Config.ARGB_8888);
    rgbAllocation.copyTo(bitmap);

    Log.d("NV21", "Conversion to Bitmap: " + (System.currentTimeMillis() - startTime) + "ms");
    return bitmap;
}

This is faster, but I feel there is still some delay. Now that I get my bitmap from RenderScript instead of using a YuvImage, is it possible to set it on my ImageView faster somehow, or to draw it on a SurfaceView?

    [ndk Bitmap](https://developer.android.com/ndk/reference/group___bitmap.html) ? – pskink Apr 24 '18 at 08:25
  • I'm new to the NDK, so please bear with me. I included it inside my project, so I can now use the struct for the bitmap. But I don't quite get how to load the byte array into the bitmap yet. In theory, I call lockPixels, add the data, then call unlockPixels, show the bitmap, and so on for each frame? – rosu alin Apr 24 '18 at 08:37
  • hmm [Writing a basic image filter in Android using NDK](http://ruckus.tumblr.com/post/18055652108/writing-a-basic-image-filter-in-android-using-ndk)? also try other results from `google("AndroidBitmap_lockPixels")` ;-) – pskink Apr 24 '18 at 08:39
  • yes thanks. that is what I was looking into now, Thanks a lot, this should be enough to help me figure it out – rosu alin Apr 24 '18 at 08:44
  • good luck then, `may the force be with you` – pskink Apr 24 '18 at 08:47
  • From what I understand here, they create the AndroidBitmapInfo using a jobject bitmap: https://github.com/ruckus/android-image-filter-ndk/blob/master/jni/imageprocessing.c Whereas I need an empty AndroidBitmapInfo in which to load my byteArray. How can I instantiate and fill my byteArray in it, without having a jobject bitmap? – rosu alin Apr 24 '18 at 08:53
  • they need it to check the format is RGBA_8888 and get the size of input bitmap – pskink Apr 24 '18 at 08:55
  • basically you create a `Bitmap` in java (`Bitmap.create` method) and pass it to your native method for filling / modifying etc – pskink Apr 24 '18 at 08:57
  • Yes, I understand that. But I get the byteArray in the NDK. I need to get a Bitmap object in the NDK from my byteArray, so that I can avoid the conversion overhead of creating bitmaps on the SDK side 30 times a second. – rosu alin Apr 24 '18 at 08:58
  • Basically, I need to create a Bitmap from a byte array, and then send it to the SDK. – rosu alin Apr 24 '18 at 08:59
  • see https://github.com/ruckus/android-image-filter-ndk/blob/master/src/com/example/ImageActivity.java#L61 - in your c code you just need to "unlock" pixels (i think it is allowed without "locking") so your bitmap is write-only from native code point of view – pskink Apr 24 '18 at 08:59
  • you could also try `Bitmap#copyPixelsFromBuffer` method if `AndroidBitmap_unlockPixels` is slow – pskink Apr 24 '18 at 09:09
  • Is this `Bitmap#copyPixelsFromBuffer` native code? – rosu alin Apr 24 '18 at 09:17
  • no, its java, but you can pass `ByteBuffer` to c in "no copy" mode – pskink Apr 24 '18 at 09:18
  • Instead of C++, consider [renderscript](https://stackoverflow.com/a/20360298/192373). This should be much faster for higher resolutions. But this will not deliver live video display at 30 or 60 FPS, `setImageBitmap()` will become the bottleneck. You don't need to produce RGB at all. You can use OpenGL with an appropriate shader instead, this saves a lot of time. – Alex Cohn Apr 24 '18 at 10:25
  • @AlexCohn thanks a lot for your answer. It helped me, and I edited my question. I get the Bitmap from renderscript now, just need to figure out a better way to include it in my Layout – rosu alin Apr 24 '18 at 11:55
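Following pskink's suggestion, the native path would be: create one `Bitmap` in Java (`Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)`), pass it to the native method, call `AndroidBitmap_lockPixels`, fill the RGBA buffer, call `AndroidBitmap_unlockPixels`, then `setImageBitmap`. The conversion you would run on the locked buffer can be sketched in plain C. This is a sketch, not the SDK's code: the function name `nv21_to_rgba` and the clamp helper are mine, it uses the common BT.601 studio-range integer approximation, and the exact coefficients depend on the color range your SDK actually delivers:

```c
#include <stdint.h>

static uint8_t clamp_u8(int x) { return x < 0 ? 0 : (x > 255 ? 255 : (uint8_t)x); }

/* Convert one NV21 frame into a tightly packed RGBA_8888 buffer, e.g. the
 * pixel pointer returned by AndroidBitmap_lockPixels() (assuming stride ==
 * width * 4). BT.601 studio-range coefficients. */
static void nv21_to_rgba(const uint8_t *nv21, uint8_t *rgba, int width, int height) {
    const uint8_t *vu = nv21 + width * height; /* interleaved V/U rows */
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int c = nv21[y * width + x] - 16;         /* luma */
            int vu_off = (y / 2) * width + (x & ~1);  /* one V/U pair per 2x2 block */
            int e = vu[vu_off] - 128;                 /* V */
            int d = vu[vu_off + 1] - 128;             /* U */
            uint8_t *p = rgba + 4 * (y * width + x);
            p[0] = clamp_u8((298 * c + 409 * e + 128) >> 8);           /* R */
            p[1] = clamp_u8((298 * c - 100 * d - 208 * e + 128) >> 8); /* G */
            p[2] = clamp_u8((298 * c + 516 * d + 128) >> 8);           /* B */
            p[3] = 255;                                                /* A */
        }
    }
}
```

Reusing a single Java-side bitmap and only writing pixels from native code avoids allocating a new Bitmap 30 times a second, which is the overhead the comments above are pointing at.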

0 Answers