
I am working on live-streaming a WiFi camera to my Android tablet. The frame grabber runs in a Thread, which in turn takes the pixels and passes them to RenderScript for some filter processing (on another Thread). My output Allocation is linked to a Surface for viewing.

The app crashes periodically with SIGSEGV faults; the monitor says it's happening in either a worker Thread, the GCDaemon, or JNISurfaceTexture. I have two kernels that I currently run (switchable), and both fail eventually. The more basic kernel just takes a pixel[] from the camera, copies it into the input Allocation, runs it through RenderScript, and then the resulting output Allocation of the forEach call is pushed to the surface using .ioSend().

If I take the pixel[] array from the camera thread, copy it directly to the output Allocation, and call .ioSend(), it never crashes (i.e., circumventing the RenderScript calls entirely). I can also create another, temporary output Allocation and use it as the return target of the forEach call, then copy that into the Surface-linked output Allocation; this does not crash either, although I do get some strange pixelation effects in the video.
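
For reference, the crash-free direct-copy path looks roughly like this (a simplified sketch, not my exact code; grabFrameFromCamera() stands in for my actual grabber):

    // Workaround that never crashes: skip RenderScript entirely and
    // push the camera frame straight to the Surface-linked Allocation.
    int[] pixels = grabFrameFromCamera();   // placeholder for my actual frame grab
    mAllocationOut.copyFrom( pixels );      // copy directly into the IO_OUTPUT Allocation
    mAllocationOut.ioSend();                // present the frame on the linked Surface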

I'm still a bit new to RenderScript, but could there be some thread-safety issue I am not aware of? Or perhaps a bug in RenderScript itself?

Here is how I am configuring the input and output Allocations:

    android.renderscript.Element elemIN = android.renderscript.Element.createPixel(
            mRS,
            android.renderscript.Element.DataType.UNSIGNED_8,
            android.renderscript.Element.DataKind.PIXEL_RGBA );
    Type.Builder TypeIn = new Type.Builder( mRS, elemIN );

    mAllocationIn = Allocation.createTyped( mRS,
            TypeIn.setX( videoWidth ).setY( videoHeight ).create(),
            Allocation.MipmapControl.MIPMAP_NONE,
            Allocation.USAGE_SCRIPT );

and

    mAllocationOut = Allocation.createTyped( mRS,
            TypeOUT.setX( videoWidth ).setY( videoHeight ).create(),
            Allocation.MipmapControl.MIPMAP_NONE,
            Allocation.USAGE_SCRIPT | Allocation.USAGE_IO_OUTPUT );

Here is my simple RGB kernel:

 uchar4 __attribute__((kernel)) toRgb_Color( uchar4 in ) {
    float4 ndviPixel;
    uchar4 out;

    ndviPixel.r = ( float )( in[0] / 255.0 );
    ndviPixel.g = ( float )( in[1] / 255.0 );
    ndviPixel.b = ( float )( in[2] / 255.0 );
    ndviPixel.a = 1.0f;

    out = rsPackColorTo8888(ndviPixel);
    ndviPixel = 0;

    return out;
}

Lastly, my call to the kernel is:

    mScript.forEach_toRgb_Color( mAllocationIn, mAllocationTemp );
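
Per frame, the path that eventually crashes looks roughly like this (again just a sketch; the copy2DRangeFrom() call is only there when I route through the intermediate mAllocationTemp, and is just one way to do the Allocation-to-Allocation copy):

    // Per-frame path that eventually crashes (sketch, not exact code).
    mAllocationIn.copyFrom( pixels );                               // camera frame -> input Allocation
    mScript.forEach_toRgb_Color( mAllocationIn, mAllocationTemp );  // run the kernel

    // Copy the intermediate result into the Surface-linked output,
    // then present the frame.
    mAllocationOut.copy2DRangeFrom( 0, 0, videoWidth, videoHeight, mAllocationTemp, 0, 0 );
    mAllocationOut.ioSend();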

UPDATE

Here's how I'm declaring my TypeOUT:

    mAllocationOut = Allocation.createTyped( mRS,
            TypeOUT.setX( videoWidth ).setY( videoHeight ).create(),
            Allocation.MipmapControl.MIPMAP_NONE,
            Allocation.USAGE_SCRIPT | Allocation.USAGE_IO_OUTPUT );

Also, I am waiting for the surface to be created from the onSurfaceTextureAvailable event like so:

    public void onSurfaceTextureAvailable( SurfaceTexture surfaceTexture, int width, int height ) {
        mSurface = new Surface( surfaceTexture );
    }

After I create my input and output allocations, I use the latched 'mSurface' to set the output surface of the output allocation, like this:

        mAllocationOut.setSurface( mSurface );

I have mSurface declared as static, if that makes any difference. I've tried with and without static and I still get the crash.
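
Putting the setup together, the order I use is roughly this (a simplified sketch; createAllocations() and startFrameGrabber() are placeholders for my actual setup code):

    // Setup order (simplified):
    // 1. onSurfaceTextureAvailable() latches mSurface (see above).
    // 2. Create the input/output Allocations.
    // 3. Attach the Surface to the IO_OUTPUT Allocation before any ioSend().
    if ( mSurface != null ) {
        createAllocations( videoWidth, videoHeight );  // builds mAllocationIn / mAllocationTemp / mAllocationOut
        mAllocationOut.setSurface( mSurface );
        startFrameGrabber();                           // starts the camera thread described above
    }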

Monitor output is here:

04-23 12:59:54.752: A/libc(15192): Fatal signal 11 (SIGSEGV), code 1, fault addr 0x0 in tid 15230 (Thread-1697)
04-23 12:59:54.853: I/DEBUG(189): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
04-23 12:59:54.853: I/DEBUG(189): Build fingerprint: 'nvidia/wx_na_wf/shieldtablet:5.0.1/LRX22C/29979_515.3274:user/release-keys'
04-23 12:59:54.853: I/DEBUG(189): Revision: '0'
04-23 12:59:54.853: I/DEBUG(189): ABI: 'arm'
04-23 12:59:54.854: I/DEBUG(189): pid: 15192, tid: 15230, name: Thread-1697  >>> helios.android <<<
04-23 12:59:54.854: I/DEBUG(189): signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x0
04-23 12:59:54.876: I/DEBUG(189):     r0 6f81a568  r1 00000001  r2 00000000  r3 00000000
04-23 12:59:54.877: I/DEBUG(189):     r4 630a3200  r5 6f81a568  r6 00000000  r7 00000001
04-23 12:59:54.877: I/DEBUG(189):     r8 12c24000  r9 7c9a0f40  sl 7e86d404  fp 00000008
04-23 12:59:54.877: I/DEBUG(189):     ip 7f8e1a10  sp 7f8e1970  lr 4211475d  pc 420d3f72  cpsr 200f0030
04-23 12:59:54.878: I/DEBUG(189): backtrace:
04-23 12:59:54.878: I/DEBUG(189):     #00 pc 000d3f72  /system/lib/libart.so (void std::__1::__tree_remove<std::__1::__tree_node_base<void*>*>(std::__1::__tree_node_base<void*>*, std::__1::__tree_node_base<void*>*)+205)
04-23 12:59:54.878: I/DEBUG(189):     #01 pc 00114759  /system/lib/libart.so (art::gc::allocator::RosAlloc::RefillRun(art::Thread*, unsigned int)+232)
04-23 12:59:54.878: I/DEBUG(189):     #02 pc 00114973  /system/lib/libart.so (art::gc::allocator::RosAlloc::AllocFromRun(art::Thread*, unsigned int, unsigned int*)+490)
04-23 12:59:54.879: I/DEBUG(189):     #03 pc 0028ba97  /system/lib/libart.so (artAllocObjectFromCodeInitializedRosAlloc+98)
04-23 12:59:54.879: I/DEBUG(189):     #04 pc 000a23cb  /system/lib/libart.so (art_quick_alloc_object_initialized_rosalloc+10)
04-23 12:59:54.879: I/DEBUG(189):     #05 pc 001d6359  /data/dalvik-cache/arm/system@framework@boot.oat
04-23 12:59:55.360: I/DEBUG(189): Tombstone written to: /data/tombstones/tombstone_01
04-23 12:59:55.361: I/BootReceiver(659): Copying /data/tombstones/tombstone_01 to DropBox (SYSTEM_TOMBSTONE)
Mat DePasquale

2 Answers


Maybe there really is a fatal memory error like you say (do you get an OOM? Try to catch it). Are you streaming the content? Maybe you are buffering too much, or handing SurfaceFlinger too large a buffer. Since you control the pixels and buffer sizes directly, many errors can occur if this isn't done carefully. Are you perhaps locking the Surface in your app? That would mean you can no longer control the size of the Canvas, and therefore the buffer size. I'm sorry if I can't help you further, but have you already searched Google for the errors you are getting?
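
For example, something like this (a rough sketch; processFrame() is a placeholder for your per-frame copy/forEach/ioSend work) would at least tell you whether a Java-heap OOM is involved:

    // Rough sketch: catch a Java-heap OOM around the per-frame work
    // and log how much heap was in use, just to rule that out.
    try {
        processFrame();   // placeholder for the copy / forEach / ioSend work
    } catch ( OutOfMemoryError e ) {
        Runtime rt = Runtime.getRuntime();
        Log.e( "Streamer", "OOM: used=" + ( rt.totalMemory() - rt.freeMemory() )
                + " max=" + rt.maxMemory(), e );
    }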

Ilja KO

The issue is the way you are accessing the input Allocation. Each element of the Allocation arrives as a uchar4 containing all four components, but it cannot be treated as an array the way it is done here. Try this instead:

    uchar4 __attribute__((kernel)) toRgb_Color( uchar4 in ) {
        float4 tmpPixel = convert_float4(in);

        //  This copy is most likely unnecessary, but done for
        //  completeness. The components are normalized to [0, 1],
        //  which is the range rsPackColorTo8888() expects.
        float4 ndviPixel;
        ndviPixel.r = tmpPixel.x / 255.0f;
        ndviPixel.g = tmpPixel.y / 255.0f;
        ndviPixel.b = tmpPixel.z / 255.0f;
        ndviPixel.a = 1.0f;

        uchar4 out = rsPackColorTo8888(ndviPixel);
        return out;
    }
Larry Schiefer
  • Good stuff, glad you were able to get that working ok. I was guessing at what/how you were doing to pixel manipulations, so I'm not surprised that it had some trouble. But, the key thing which solved your issue (the crash) was the correct use of the `uchar4` as a vector rather than an array. – Larry Schiefer Apr 23 '15 at 14:25
  • Bummer. I think I spoke too soon. I stripped down my RGB kernel to just return the 'uchar4 in' only and I still get a crash after a period of time. – Mat DePasquale Apr 23 '15 at 14:43
  • Is `TypeOut` created in the same way as `TypeIn`? Update your original post with the logcat (crash) output. – Larry Schiefer Apr 23 '15 at 14:46
  • Also note, for the output `Allocation` being a direct I/O type allocation, you'll need to set the I/O target with a `SurfaceTexture`. – Larry Schiefer Apr 23 '15 at 14:47
  • I added more information to my original post. I changed the code back to using an intermediate output allocation, same settings as mAllocationOut but without the IO_SEND flag set. I verified that then copying this intermediate allocation to the output allocation (linked to the surface) does not crash at all. However, it does have some pixelation anomalies (streaks?). Starting to think this is not code related and either an RS or Android bug. – Mat DePasquale Apr 24 '15 at 14:12
  • I have narrowed the call down to mAllocationOut.ioSend() that causes the crash. If I remove that call, it doesn't crash (obviously this kills the output to the Surface). – Mat DePasquale Apr 24 '15 at 14:39
  • It's highly unlikely to be an Android RS core issue. It's possible that it could be a vendor specific RS implementation problem. Have you tried it on an emulator or other device? RS has been in place since Froyo (internal only), publicly available since Honeycomb, and continues to evolve and be enhanced. – Larry Schiefer Apr 24 '15 at 14:59
  • You can force CPU execution by doing "adb shell setprop debug.rs.default-CPU-driver 1". I also wanted to note that the array notation mentioned above does work for vectors. It just only ever lets you get access to 1 component at a time. – Stephen Hines Apr 25 '15 at 02:20
  • Thanks, @StephenHines for the CPU info as well as the array indexing. I didn't realize the indexes would work like that. – Larry Schiefer Apr 25 '15 at 02:35
  • If you can add more of your source code, I would suggest filing a bug at https://code.google.com/p/android/issues/list so that we can take a closer look. Nothing else really jumps out at me, other than some initial thoughts similar to the first poster who suggested that perhaps OOM was occurring. If you have made sure you aren't using too much memory (and holding onto it forever), the next thing would be for us to reproduce your results (hence my bug suggestion). – Stephen Hines Apr 26 '15 at 17:08
  • Thanks for the tip Stephen. I had no clue you could do that either. I will be running lots of experiments today, I'll let you know the outcome. Thanks again to both you and Larry for taking the time to answer my question. – Mat DePasquale Apr 27 '15 at 13:40
  • I am able to run my 3 kernels successfully on the CPU. The performance is equal to, or better than, the GPU. I am running on an NVIDIA Shield. I was expecting the GPU to be much better, but it seems like they have a bug in their code. Another thing of note: when I use the intermediate allocation (mentioned above) and copy to the Surface output allocation using the CPU, I get none of the artifacts I see when using the GPU. Seems like something is definitely amiss with the NVIDIA internals. – Mat DePasquale Apr 27 '15 at 14:15