
Edit, as I wasn't clear the first time:

I'm trying to use Android MediaCodec to get each frame from an existing video file (videoBefore.MP4), process the frame (e.g. blur it), and then encode each frame into a new video file (videoAfter.MP4).
The new video has to have the same duration as the original.

Just one condition:
Every frame may take an unlimited amount of time to process, which means a 10-second video could take a minute of processing.

So far I have only seen examples with quick processing (like a blue shift) that can be done in real time.
Is there any way to grab a frame from the video, "take my time" processing it, and still produce a new video with the same frame rate or frame timing?

*It would be better if I could also preserve the audio, but the frames are what matter.

Thanks!

yarin

1 Answer


You can take as long as you like. The timing of the frames is determined by the presentation time stamp embedded in the .mp4 file, not by the rate at which the frames are processed.

You get the time value for each frame from MediaExtractor#getSampleTime(), pass it into the decoder's queueInputBuffer(), and receive it in the BufferInfo struct associated with the decoder's output buffer. Do your processing and submit the frame to the encoder, again specifying the time stamp in queueInputBuffer(). It will be passed through BufferInfo to the output side of the encoder, and you just pass the whole BufferInfo to MediaMuxer#writeSampleData().
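
For illustration, here is a minimal sketch of that time-stamp flow, assuming the extractor, decoder, encoder and muxer have already been configured and that the codec color formats line up. The class name, processFrame() and videoTrackIndex are placeholders rather than anything from the test programs, and EOS handling is omitted.

```java
// Minimal sketch (not from the test code) of how the presentation time stamp
// travels extractor -> decoder -> encoder -> muxer, using the API 18-era
// ByteBuffer interface.  Setup, EOS handling and color-format conversion are
// omitted; processFrame() and videoTrackIndex are placeholders.
import java.nio.ByteBuffer;

import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaMuxer;

public class TimestampPassThrough {
    private static final long TIMEOUT_US = 10000;

    static void feedDecoder(MediaExtractor extractor, MediaCodec decoder) {
        int inIndex = decoder.dequeueInputBuffer(TIMEOUT_US);
        if (inIndex >= 0) {
            ByteBuffer inBuf = decoder.getInputBuffers()[inIndex];
            int size = extractor.readSampleData(inBuf, 0);
            if (size >= 0) {
                long ptsUs = extractor.getSampleTime();     // time stamp stored in the .mp4
                decoder.queueInputBuffer(inIndex, 0, size, ptsUs, 0);
                extractor.advance();
            }   // else: end of stream (EOS handling omitted)
        }
    }

    static void drainDecoderFeedEncoder(MediaCodec decoder, MediaCodec encoder) {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int outIndex = decoder.dequeueOutputBuffer(info, TIMEOUT_US);
        if (outIndex >= 0) {
            // Take as long as you like here; nothing is timing-sensitive.
            byte[] edited = processFrame(decoder.getOutputBuffers()[outIndex], info);
            int encIndex = encoder.dequeueInputBuffer(-1);  // block until a buffer is free
            ByteBuffer encBuf = encoder.getInputBuffers()[encIndex];
            encBuf.clear();
            encBuf.put(edited);
            // Re-use the decoder's presentationTimeUs so the frame timing survives.
            encoder.queueInputBuffer(encIndex, 0, edited.length, info.presentationTimeUs, info.flags);
            decoder.releaseOutputBuffer(outIndex, false);
        }
    }

    static void drainEncoder(MediaCodec encoder, MediaMuxer muxer, int videoTrackIndex) {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int outIndex = encoder.dequeueOutputBuffer(info, TIMEOUT_US);
        if (outIndex >= 0) {
            // The BufferInfo (including the time stamp) goes straight to the muxer.
            muxer.writeSampleData(videoTrackIndex, encoder.getOutputBuffers()[outIndex], info);
            encoder.releaseOutputBuffer(outIndex, false);
        }
    }

    // Placeholder for the slow per-frame edit (blur etc.); here it just copies the bytes.
    static byte[] processFrame(ByteBuffer frame, MediaCodec.BufferInfo info) {
        byte[] data = new byte[info.size];
        frame.position(info.offset);
        frame.get(data);
        return data;
    }
}
```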

You can see the extraction side in ExtractMpegFramesTest and the muxing side in EncodeAndMuxTest. The DecodeEditEncodeTest does the encode/decode preserving the time stamp, but doesn't show the MediaExtractor or MediaMuxer usage.

Bear in mind that the codecs don't really care about time stamps. It's just the extractor/muxer code that handles the .mp4 file that cares. The value gets passed through the codec partly as a convenience, and partly because it's possible for encoded frames to appear out of order. (The decoded frames, i.e. what comes out of the decoder, will always be in order.)

If you fail to preserve the presentation times, you will get video that either lasts zero seconds (and isn't very interesting), or possibly video that lasts a very, very long time. The screenrecord command introduced in Android 4.4 uses the time stamps to avoid recording frames when the screen isn't being updated.

fadden
  • As always, your amazing answers are so helpful. I will study this answer part by part to get the result I'm looking for. Again, thank you very much! BTW, do you have a public email or a chat you hang out in? (Just looking for more ways to reach you. :) ) – yarin Dec 03 '13 at 17:20
  • Sorry, just the stackoverflow. :-) – fadden Dec 03 '13 at 17:47
  • As a side question, do you have any idea how to store large byte arrays to the SD card as quickly as possible? I grab frames in real time from a GLSurfaceView using glReadPixels(), but converting them to PNG or JPEG files drops the frame rate below 10. Any idea? (The resolution is high, 960x720; at a smaller resolution I get 15-16 fps, which is good.) (A rough sketch of dumping raw frames without image compression appears after these comments.) – yarin Dec 04 '13 at 19:51
  • I use this trick so I can post-process those frames together with the camera frames, which I can control with ExtractMpeg. – yarin Dec 04 '13 at 19:59
  • The description for http://bigflake.com/mediacodec/#ExtractMpegFramesTest mentions the timing breakdown for a Nexus 5. Most of the time is spent in PNG compression, so you can look for alternatives to that. The glReadPixels() call in my test was extremely slow, something I'm currently looking into, but I don't know a way to avoid it. – fadden Dec 04 '13 at 20:30
  • In my case glReadPixels(960,720) is called more than 20 times per second, which is excellent; it runs inside the onDraw() method. But how do I save the data without JPEG or PNG? (JPEG is faster, but I think it will lose quality.) There must be a way. It's not related to the codec; I just grab the frames from a game engine for post-processing. There has to be a way to save them. – yarin Dec 04 '13 at 21:10
  • It's the last bottleneck in my complicated app, because the media codec lets me control everything except collecting the GLSurfaceView frames before processing. – yarin Dec 04 '13 at 21:50
  • I found the cause of the slow glReadPixels() -- I wasn't including an alpha plane in my EGLConfig. The glReadPixels() overhead went from 170us to 6us on a Nexus 5. The PNG compression overhead is still substantial; you may want to experiment with fast lossless compression code (e.g. LZO). – fadden Dec 06 '13 at 02:17
  • Yes it might be the direction,Thanks a lot fadden :) – yarin Dec 06 '13 at 13:27
  • Hi again fadden, a related second thought on this subject. CyanogenMod released a screen recording app this week. I don't have it yet, but it seems it can capture everything. I have one question: if I record while an app with 10 layers is running, and every surface layer (like a GLSurfaceView) has parts with transparent pixels so I can see all the layers at the same time (like many apps I know), will that app record it with the alpha pixels? Because if the answer is yes, they would need to save 10 frames in parallel and process them. Am I missing something? Will it record the alpha pixels as black pixels? – yarin Dec 12 '13 at 08:47
  • I can't speak to what the Cyanogen screen recorder does. The Android 4.4 `screenrecord` command captures the output composited to a virtual display, which is implicitly backed by opaque black. So only one image is saved per frame, and there are no fully-transparent pixels in the final frames. (See also http://bigflake.com/screenrecord/ .) – fadden Dec 12 '13 at 15:59
  • Just to verify that I understand: if my app has two surfaces, one above the other, the lower one totally blue and the upper one totally transparent, will screenrecord give me a fully black capture (alpha=0 treated as black) or a fully blue one (i.e. the command recognizes that the upper surface is transparent)? I understood that I would get black. (I will check the example, but just to be sure.) – yarin Dec 12 '13 at 17:27
  • The surface composition is being done by SurfaceFlinger, so you'll get the same results in the `screenrecord` output as you will see on screen. In your example, you should see fully blue. – fadden Dec 12 '13 at 18:41
  • OK, I'm totally deviating from the subject, but I have to know: can I somehow grab the frame from SurfaceFlinger? Because if so, I've solved all my problems in one line of code. :) – yarin Dec 12 '13 at 19:11
  • In short, I've been working for more than two months on an app that merges two surfaces and encodes them (with FFmpeg for now), something SurfaceFlinger does with much better performance. So if I could get the final frame (as bytes or in any other form, with JNI or in any other way), it would be the best thing I could get. If you want, I can post it as another question. Sorry for the interruption, fadden. – yarin Dec 12 '13 at 20:40
  • Not from an app -- it would be equivalent to a screen capture, which isn't allowed (unless you have a rooted device). The virtual display composition is currently done with GLES -- it just renders a new surface from the set of windows on screen -- so you can get very good composition performance out of the GPU. – fadden Dec 12 '13 at 20:49
  • Any direction on how to do it? Where should I start? There are only a few guides, if any, when I Google it. – yarin Dec 12 '13 at 21:04
  • I have a screen with two surfaces, and I just need to merge them as they appear on screen. I have a Java class I built that does this; how do I use the GPU for this purpose? Is it for grabbing the frames or for processing them? – yarin Dec 12 '13 at 21:06
  • You should post this as a separate question. – fadden Dec 12 '13 at 21:09
  • In my case, the lower surface is a SurfaceView with the camera attached, and the upper one is a GLSurfaceView running an animation using GLES 2.0 (AndEngine, if you know it). Are there any limitations on GPU rendering in this case? – yarin Dec 12 '13 at 21:11
  • Hi, I tried to run ExtractMpegFramesTest and I'm stuck in this loop; the input video is an mp4 file captured with my device. Any idea? The output: 12-17 13:36:51.400: D/ExtractMpegFramesTest(10956): textureID=1 12-17 13:36:51.405: I/OMXClient(10956): Using client-side OMX mux. 12-17 13:36:51.430: D/ExtractMpegFramesTest(10956): loop 12-17 13:36:51.440: D/ExtractMpegFramesTest(10956): input buffer not available 12-17 13:36:51.450: D/ExtractMpegFramesTest(10956): no output from decoder available 12-17 13:36:51.450: D/ExtractMpegFramesTest(10956): loop – yarin Dec 17 '13 at 11:46
  • I'm running CyanogenMod version 4.3.1, so maybe it's not fully supported yet. – yarin Dec 17 '13 at 11:54
  • BTW, some of the examples use an input and output surface, which is not supported by the public API; copying those classes from the source code is possible but hard for beginners. – yarin Dec 17 '13 at 19:03
  • fadden, hi again. I don't want to bother you, but is there any chance of uploading the Android Breakout game recorder patch example already applied to the game? Applying patches looks like a big subject on its own. Maybe you have quick steps to apply it for someone not familiar with patches? – yarin Dec 21 '13 at 16:58
  • OK, the patch works. But in the GameRecorder class, where should I find this: private InputSurface mInputSurface; where should I take this class from? And another one: format.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface); this line pops an error too. I'm struggling with it. – yarin Dec 21 '13 at 17:23
  • I applied it with "Apply Patch" in Eclipse, but I get some errors about missing files. How do I solve this problem? Should I post a new question on this subject? I'm lost here. :/ – yarin Dec 21 '13 at 18:28
  • Make sure you're building for API 18 -- part of the patch updates `android:minSdkVersion` in the manifest. The best way to apply the patch is to download the Breakout sources with git, save the patch in the top directory, and use `git am` to apply the patch with git. FWIW, if you want to see simultaneous screen rendering and recording in action, the "Show + capture camera" feature of Grafika also does this (https://github.com/google/grafika/). That's also juggling the camera so the code is a little more complex. – fadden Dec 21 '13 at 19:18
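
Regarding the side discussion above about saving glReadPixels() output quickly: here is a rough sketch, not from this thread, of dumping each 960x720 frame as raw RGBA bytes so that no PNG/JPEG encoding happens on the rendering thread. The frame size, file naming and RawFrameDumper class are assumptions, and you might still layer fast lossless compression (e.g. LZO) on top, as suggested above.

```java
// Sketch only: reads the current frame back with glReadPixels() and writes the
// raw RGBA bytes straight to disk, skipping PNG/JPEG compression.  Call from
// the GL thread (e.g. inside onDrawFrame()); frame size and paths are assumptions.
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

import android.opengl.GLES20;

public class RawFrameDumper {
    private static final int WIDTH = 960;
    private static final int HEIGHT = 720;

    // Direct buffer reused across frames to avoid a per-frame allocation.
    private final ByteBuffer pixelBuf =
            ByteBuffer.allocateDirect(WIDTH * HEIGHT * 4).order(ByteOrder.nativeOrder());

    public void dumpFrame(String path) throws IOException {
        pixelBuf.rewind();
        GLES20.glReadPixels(0, 0, WIDTH, HEIGHT,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuf);
        pixelBuf.rewind();
        FileOutputStream fos = new FileOutputStream(path);
        try {
            fos.getChannel().write(pixelBuf);   // raw write, no image encoding
        } finally {
            fos.close();
        }
    }
}
```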