
I would like to load video from a file, apply some transformations to it, and render it back into a file. The transformation mainly consists of overlapping two videos and shifting one of them in time. Grafika has some examples relevant to this issue. RecordFBOActivity.java contains some code for rendering a video file from a surface. I'm having trouble changing two things:

  • instead of rendering primitives in motion I need to render previously decoded and transformed video
  • I would like to render the surface to a file as fast as possible, not in real time alongside playback

My only success so far has been loading an .mp4 file and adding some basic seeking features to PlayMovieActivity.java.

In my research I came across these examples, which also use generated video. I didn't find them very useful, because I couldn't swap the generated video for one decoded from a file.

Is it possible to modify the code of RecordFBOActivity.java so that it displays video from a file instead of a generated animation?

Krzysztof Kansy
  • I have absolutely no idea what you are actually asking ... :( – Fildor Apr 14 '15 at 10:52
  • I've edited the question, so it's more clear now, I hope it helped. – Krzysztof Kansy Apr 14 '15 at 12:12
  • You still want to render primitives; the difference is that the primitives would be textured from the decoded video. One example of this is the "texture from camera" activity -- if you replace the camera with a video decoder, you would be able to render your decoded video at an arbitrary size onto an EGL surface. Rendering at full speed is easy -- just don't sleep. You do need to track the presentation times to ensure that two videos recorded at different speeds aren't mis-merged. – fadden Apr 14 '15 at 15:38
  • FWIW, RecordFBOActivity is all about efficiently rendering everything twice, once to the screen and once to the Surface input of a MediaCodec. If you don't want to display frames as it works (which slows things down) you can just draw on the single EGL surface. – fadden Apr 14 '15 at 15:39
  • Thanks @fadden, this is really helpful. I've figured out that CameraCaptureActivity would be a better starting point. I'm now recording video from a file at playback speed using its code. Your first comment will be helpful for implementing full-speed rendering. For now I'll stick with CameraCaptureActivity and try to apply your suggestions to my code. I'm using it instead of TextureFromCameraActivity because it already contains rendering to file. I'll post my results when I'm done. – Krzysztof Kansy Apr 15 '15 at 07:29
  • @fadden, I've changed my code so that file playback runs at full speed. But now I'm struggling to encode this video at the original speed rather than this super-fast one. I think I should tinker with VideoEncoderCore.java, but so far I've had no good results. Can you tell me how to make the encoder adapt the playback speed to the original one? – Krzysztof Kansy Apr 15 '15 at 11:04
  • The pace of the final video depends on the presentation timestamps you set for each frame. You'll need to establish a common timeline for the blended video. The camera-based activities are just using the timestamp from the camera frame, which is not what you want -- you want to use the timestamps from the decoded video, matched up between the two streams. If they're both recorded at the same, fixed frame rate, it should be straightforward. The timestamp lives in the muxed output, not the raw H.264; it's passed through the codec to VideoEncoderCore line 192, which pulls it out of BufferInfo. – fadden Apr 15 '15 at 16:08
  • Thank you for your efforts, @fadden. Does that mean I have to calculate the frame time of the original video first, and then change presentationTimeUs in mBufferInfo every time VideoEncoderCore writes sample data to the muxer? – Krzysztof Kansy Apr 16 '15 at 07:25
  • Essentially, yes. Assuming the videos have a common, fixed frame rate, you can sample the frame rate from the first few frames to determine what the rate is, and generate timestamps at that rate. If one video starts first and ends last, and both videos have the same, fixed frame rate, you could just use those timestamps. If the frame rates are different or variable, you'd need to construct your own frame timeline and fit the video frames from each stream in it (which can get messy). – fadden Apr 16 '15 at 16:05
  • For now I decided to work on a single, unmodified video file. I have implemented rendering to file, but it is way too slow for my application (on a Samsung Galaxy Note 4 it takes around the length of the video sample). I tried to create full-speed rendering with SpeedControlCallback: it now releases one frame at a time, then waits in a while loop with sleep for permission to release the next one. This permission comes from TextureMovieEncoder, right after drawing the frame and swapping the buffer. @fadden, can you tell me if my approach is correct? – Krzysztof Kansy Apr 20 '15 at 12:29
  • There is no reason to sleep. Feed frames to the encoder as fast as you can, and use a nonzero delay while acquiring an input buffer. http://bigflake.com/mediacodec/#DecodeEditEncodeTest – fadden Apr 20 '15 at 15:19
  • The problem is, when I keep pushing frames as fast as I can onto a surface (I'm doing it with MoviePlayer, btw), TextureMovieEncoder skips some frames (due to encoding time, I guess), so the resulting movie seems much faster than the original. Therefore I thought that MoviePlayer should sleep while encoding is in progress. – Krzysztof Kansy Apr 21 '15 at 08:53
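The timestamp bookkeeping described in the comments above can be sketched in plain Java. This is a minimal sketch under the assumption that both clips share a fixed frame rate; the `TimelineMapper` class name is hypothetical, and in a real pipeline its output would be written into `BufferInfo.presentationTimeUs` before `MediaMuxer.writeSampleData()`:

```java
// Hypothetical helper: generate encoder presentation timestamps on a
// common, fixed-frame-rate timeline, as suggested in the comments.
// The encoder is fed frames as fast as possible; these timestamps,
// not wall-clock time, determine the pace of the final video.
class TimelineMapper {
    private final long frameDurationUs;  // e.g. 33333 us for ~30 fps
    private long nextPtsUs = 0;

    TimelineMapper(double framesPerSecond) {
        this.frameDurationUs = Math.round(1_000_000.0 / framesPerSecond);
    }

    // Estimate the frame rate by sampling the first few decoded
    // timestamps (in microseconds), as fadden suggests.
    static double estimateFps(long[] firstPtsUs) {
        long spanUs = firstPtsUs[firstPtsUs.length - 1] - firstPtsUs[0];
        return (firstPtsUs.length - 1) * 1_000_000.0 / spanUs;
    }

    // Timestamp for the next blended output frame on the common timeline.
    long nextPresentationTimeUs() {
        long pts = nextPtsUs;
        nextPtsUs += frameDurationUs;
        return pts;
    }
}
```

If the two source streams have different or variable frame rates, you would instead fit each decoded frame's timestamp into this generated timeline (picking the nearest slot), which is the messier case fadden mentions.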

1 Answer


You can try INDE Media for Mobile, tutorials are here: https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials

Sample code showing how to enable editing or apply transformations is on GitHub: https://github.com/INDExOS/media-for-mobile

It has transcoding/remuxing functionality in the MediaComposer class and the possibility to edit or transform frames. Since it uses the MediaCodec API internally, encoding is done on the GPU, so it is very battery-friendly and works as fast as possible.
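A minimal usage sketch, based on the Intel tutorials linked above. The context, listener, and path variables are placeholders, and the exact call signatures are an assumption to verify against the linked samples; this runs on-device, not on a desktop JVM:

```java
// Sketch only -- API names taken from the INDE tutorials; verify
// against the linked sample code.
org.m4m.MediaComposer composer = new org.m4m.MediaComposer(
        new org.m4m.android.AndroidMediaObjectFactory(context),  // placeholder context
        progressListener);                                       // placeholder listener
composer.addSourceFile(srcPath);  // input .mp4
composer.setTargetFile(dstPath);  // output .mp4
composer.start();                 // transcodes asynchronously, reporting progress
```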


Marlon