
I'm using the openh264 library, in a C++ program, to convert a set of images into an H.264-encoded MP4 file. These images represent updates to the screen during a session recording.

Let's say a set contains 2 images: one initial screen grab of the desktop and another one, 30 seconds later, when the clock changes.

Is there a way for the stream to represent a 30-second-long video using only these 2 images?

Right now, I'm brute-forcing this by encoding the first frame multiple times to fill the gap. Is there a more efficient and/or faster way of doing this?

skarack

1 Answer


Of course. Set a frame rate of 1/30 fps and you end up with one frame every 30 seconds. It doesn't even have to be done in the H.264 stream itself - it can also be done afterwards, for example when the stream gets muxed into an MP4 file.

Florian Zwoch
  • But then I will be stuck with 1/30 fps. My example was overly simplistic. A better one could be someone opening a PDF, scrolling, reading, scrolling, etc. In this case, the scrolling part might not require any duplicated frame but the reading part will. – skarack Jun 07 '17 at 22:39
  • Then what is your FPS strategy here? Who decides when to drop or record frames? Anyway, in almost any common scenario these things are done via variable frame rates and samples being time stamped. See MP4 files or RTP streams. H.264 streams themselves are not aware of timing (they can carry optional timing information though). – Florian Zwoch Jun 08 '17 at 08:36
  • I never considered variable frame rate. That term didn't come up while looking for a solution, but now that I include it in my searches, I see similar issues. If you could just edit your answer to include the VFR bit, I will accept your answer. – skarack Jun 08 '17 at 12:12