
I have to store continuous video streams from many IP cameras. The video is encoded in H.264 and the audio in AAC or MP3. The recorded videos will be played mostly on mobile devices, but also in browsers.

  • What would be the best strategy to build a scalable recorder service?

  • What is the best storage format? MP4?

  • Should I convert the video directly to MP4, or is it better to store raw RTP?

  • What's the best way to ensure reliability, minimize frame loss, and avoid losing sync between audio and video?

  • I would also like to hear about similar experiences

Thanks!

gipsh

1 Answer

  • What would be the best strategy to build a scalable recorder service?

Globally: one physical device (e.g. a PC) running a main controller daemon, which spawns one dedicated recorder process per camera. For a single-device setup this seems a fairly common pattern to me.
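A minimal sketch of that controller-plus-recorders idea, assuming ffmpeg is available and the cameras expose RTSP URLs (the URLs, output paths, and segment length below are made-up illustration values):

```python
import subprocess
import time

def recorder_cmd(rtsp_url, out_dir, cam_id):
    """Build an ffmpeg command that stream-copies one camera's
    H.264/AAC feed into hourly MP4 segments (no re-encoding)."""
    return [
        "ffmpeg",
        "-rtsp_transport", "tcp",      # RTP over TCP avoids UDP packet loss
        "-i", rtsp_url,
        "-c", "copy",                  # remux only, no transcoding
        "-f", "segment",               # split output into fixed-length files
        "-segment_time", "3600",
        "-reset_timestamps", "1",
        "-strftime", "1",
        f"{out_dir}/cam{cam_id}-%Y%m%d-%H%M%S.mp4",
    ]

def supervise(cameras, out_dir):
    """Controller daemon: spawn one recorder per camera,
    restart any recorder that exits."""
    procs = {cid: subprocess.Popen(recorder_cmd(url, out_dir, cid))
             for cid, url in cameras.items()}
    while True:
        for cid, proc in procs.items():
            if proc.poll() is not None:  # recorder died -> respawn it
                procs[cid] = subprocess.Popen(
                    recorder_cmd(cameras[cid], out_dir, cid))
        time.sleep(5)
```

A real service would add logging, backoff on repeated failures, and clean shutdown, but the shape is the same: the controller only watches processes, each recorder owns exactly one camera.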

  • What is the best storage format? MP4?

Resolution, compression and quality are complex questions that can partially be reduced to simple maths:

writing capacity = number of hard drives × HDD write bandwidth − number of cameras × encoded video bitrate.

Another way of looking at it is the storage limit:

storage limit = number of hard drives × HDD capacity − number of cameras × encoded video bitrate × time.

You should also check the connection between your device and the cameras: ethernet headroom = link speed (e.g. 100 Mbps) − number of cameras × per-camera stream bitrate.
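To make that arithmetic concrete, here is a toy calculation with made-up deployment numbers (2 drives writing 100 MB/s each with 4 TB capacity, 16 cameras at 4 Mbit/s, a 100 Mbps link):

```python
# Hypothetical deployment figures -- substitute your own.
drives, hdd_write_mb_s, hdd_capacity_tb = 2, 100, 4
cameras, cam_mbit_s = 16, 4

cam_mb_s = cameras * cam_mbit_s / 8                 # total encoded video: 8 MB/s
write_headroom = drives * hdd_write_mb_s - cam_mb_s # MB/s of write capacity to spare

tb_per_day = cam_mb_s * 86400 / 1e6                 # ~0.69 TB recorded per day
days_until_full = drives * hdd_capacity_tb / tb_per_day

ethernet_headroom = 100 - cameras * cam_mbit_s      # Mbit/s left on a 100 Mbps link
```

With these numbers the disks are nowhere near their write limit (192 MB/s spare) but fill up in under two weeks, and the 100 Mbps link has only 36 Mbit/s of slack: storage and network, not write speed, are the first walls you hit.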

Considering MP4, you need to compare its visual quality with other formats at equal bitrate, but it seems a good choice.

  • Should I convert the video directly to MP4, or is it better to store raw RTP?

I think there is no need to store raw data in most cases. But this point may depend on the CPU/GPU side of your hardware.

Which limit will you reach first: HDD size and write speed, or encoding speed?

If you can't encode fast enough, there is no choice but to write raw. If you can't write raw fast enough, no choice but to encode. Sometimes you won't be able to do either: then lower the resolution, upgrade the hardware, or use fewer cameras :)

Note that cameras can deliver different formats, more or less ready to use. There is a world between raw YUV and MJPEG!
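To put numbers on that gap, a back-of-the-envelope sketch (assuming 1080p at 30 fps in 4:2:0, and a typical, not guaranteed, H.264 bitrate):

```python
# Raw YUV 4:2:0 costs 1.5 bytes per pixel per frame.
width, height, fps = 1920, 1080, 30
raw_mbit_s = width * height * 1.5 * fps * 8 / 1e6
print(round(raw_mbit_s))   # ~746 Mbit/s for a single raw 1080p30 camera

# The same stream encoded as H.264 is often around 4 Mbit/s,
# i.e. roughly two orders of magnitude smaller.
h264_mbit_s = 4
```

One raw 1080p camera alone would saturate several hard drives and any 100 Mbps link, which is why storing the already-encoded H.264 is almost always the right call here.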

Last point: if your application relies on multiple physical devices, the "smaller" ones can be specialized for acquisition while the bigger ones collect, convert and store the data.

  • What's the best way to ensure reliability, minimize frame loss, and avoid losing sync between audio and video?

Buy good cameras. Don't use overly long wires, if there are wires. Take care of both. Don't use more cameras than your system can manage.

  • I would also like to hear about similar experiences

I'm currently working on an embedded device running Linux, managing up to 4 USB cameras. As I needed an interactive overlaid interface on my video, I switched from ffmpeg to my own Python script. This product will soon be on the market; all the prototypes are already sold. To increase performance (the FPS is too low to be perfect, mainly due to display and overlay), I'm currently working on a C version of the program.

There are some differences between our projects: I don't need to save streams, only pictures. As I display one camera at a time on screen, there was also no need for a sub-process per camera, as you would need. So I can't be more precise on these points. From experience: you don't want to do any project-specific development for video conversion and capture. Your question is tagged ffmpeg; stick with it as long as it can do what you need.

Your question has been down-voted for being too broad, but when designing a new service, many questions are legitimate to ask, as they are less documented than pure code.

technico