
I am supporting an application with video chat functionality. I use Camera2 for API >= 21, and the camera works. Now I need to receive data from the camera of my device, write it into a byte[], and then pass the array to a native method that processes and transmits the images to the other party. The video transfer functionality is written in C++. My task is to properly record video into a byte[], because that is the argument the native method accepts; it carries out all further steps for displaying the video.

If I start adding anything, the camera stops working. Help me implement this task as correctly and simply as possible. I tried to use MediaRecorder, but it does not write data into a byte[]. I looked at the standard Google examples such as Camera2Basic and Camera2Video, and tried to use MediaRecorder the way those tutorials do, but it does not work. ImageReader, as I understand it, is used only for still images. MediaCodec is too complicated; I could not really understand it. What is the best and easiest way to obtain images from my device's camera and record them into a byte[]? If possible, give me a code sample or a resource where I can see one. Thanks.

Jackky777
  • Have you looked at Allocations? http://developer.android.com/reference/android/renderscript/Allocation.html – rcsumner Mar 30 '16 at 16:36
  • @Sumner – I tried this; it doesn't work: https://android.googlesource.com/platform/cts/+/de096f7/tests/tests/hardware/src/android/hardware/camera2/cts/AllocationTest.java#493 – Jackky777 Mar 30 '16 at 19:19
  • Also, I don't understand what to do with TextureView; the example doesn't use it – Jackky777 Mar 30 '16 at 19:20

1 Answer


You want to use an ImageReader; it's the intended replacement of the old camera API preview callbacks (as well as for taking JPEG or RAW images, the other common use).

Use the YUV_420_888 format.

ImageReader's Images use ByteBuffer instead of byte[], but you can pass the ByteBuffer directly through JNI and get a void* pointer to each plane of the image by using standard JNI methods. That is much more efficient than copying to a byte[] first.
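A minimal sketch of the key property being relied on here, using plain java.nio so it runs without Android classes: the buffers returned by `Image.Plane.getBuffer()` are direct ByteBuffers, so native code can obtain a `void*` to their memory via the standard JNI calls `GetDirectBufferAddress` and `GetDirectBufferCapacity`, with no copy. The native method below is commented out and hypothetical, just to show the shape of the JNI boundary:

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    // Hypothetical native method; on the C++ side you would call
    // env->GetDirectBufferAddress(plane) to get a void* to the pixel data
    // and env->GetDirectBufferCapacity(plane) for its length in bytes.
    // public static native void processPlane(ByteBuffer plane, int rowStride, int pixelStride);

    public static void main(String[] args) {
        // The buffers from Image.Plane.getBuffer() are direct, like this one:
        ByteBuffer plane = ByteBuffer.allocateDirect(640 * 480);
        System.out.println(plane.isDirect());  // direct => JNI can get its address
        System.out.println(plane.capacity());
    }
}
```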


Edit: A few more details:

This is assuming you have your own software video encoding/network transmission library, and you don't want to use Android's hardware video encoders. (If you do, you need to use the MediaCodec class).

  1. Set up preview View (SurfaceView or TextureView), set its size to be the desired preview resolution.
  2. Create ImageReader with YUV_420_888 format and the desired recording resolution. Connect a listener to it.
  3. Open the camera device (can be done in parallel with the previous steps)
  4. Get a Surface from both the View and the ImageReader, and use them both to create a camera capture session.
  5. Once the session is created, create a capture request builder with TEMPLATE_RECORD (to optimize the settings for a recording use case), and add both Surfaces as targets for the request.
  6. Build the request and set it as the repeating request.
  7. The camera will start pushing buffers into both the preview and the ImageReader. You'll get an onImageAvailable callback whenever a new frame is ready. Acquire the latest Image from the ImageReader's queue, get the three ByteBuffers that make up the YCbCr image, and pass them through JNI to your native code.
  8. Once you're done processing an Image, be sure to close it. For efficiency, there's a fixed number of Images in the ImageReader, and if you don't return them, the camera will stall because it has no buffers to write into. If you need to process multiple frames in parallel, you may need to increase the ImageReader constructor's maxImages argument.
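Most of the per-frame work in step 7 comes down to respecting each plane's rowStride and pixelStride, since YUV_420_888 buffers may be padded or interleaved. If you do need a tightly packed byte[] (as the question asks) rather than passing the ByteBuffers straight through JNI, a sketch of repacking one plane looks like this. Plain java.nio is used so the example is runnable anywhere; in real code the buffer, strides, and dimensions come from `Image.getPlanes()`:

```java
import java.nio.ByteBuffer;

public class PlanePacker {
    /**
     * Copies a (possibly padded) image plane into a tightly packed
     * byte[] of width*height samples. rowStride is the distance in bytes
     * between rows; pixelStride between consecutive samples in a row.
     */
    static byte[] packPlane(ByteBuffer buf, int width, int height,
                            int rowStride, int pixelStride) {
        byte[] out = new byte[width * height];
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                out[row * width + col] = buf.get(row * rowStride + col * pixelStride);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Synthetic 2x2 plane with rowStride 4 and pixelStride 1 (padded rows).
        ByteBuffer buf = ByteBuffer.wrap(new byte[] {
                1, 2, 0, 0,   // row 0: samples 1,2 + 2 padding bytes
                3, 4, 0, 0    // row 1: samples 3,4 + 2 padding bytes
        });
        byte[] packed = packPlane(buf, 2, 2, 4, 1);
        for (byte b : packed) System.out.print(b + " ");
    }
}
```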
Eddy Talvala
    Can you give me some insight into the JNI processing? I am not sure what to do with the ByteBuffers in native code; you linked to the NewDirectByteBuffer function. I need to pass the video frame as a byte[] to OpenTok using BaseVideoCapturer.provideByteArrayFrame – Tomasz Kryński Jun 22 '18 at 16:51
  • Is it also possible to pass-through the `Image`s you get from the `ImageReader` into a MediaRecorder to record them to a file as well? See my question here: https://stackoverflow.com/questions/76914334/ – mrousavy Aug 16 '23 at 14:34