
I am working on the effects of network losses in video transmission. In order to simulate the network losses I use a simple program which drops random RTP packets from the output of H.264 RTP encoding.
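
The drop program itself is simple. A minimal sketch of the idea in Python is shown below; it assumes (my assumption, not something JM documents prominently) that JM's RTP output file is a plain dump in which each packet is preceded by a 4-byte little-endian length field, and `drop_packets` is only an illustrative name.

    import random
    import struct
    import sys

    # Sketch of a drop simulator for a JM-style RTP dump (assumed format:
    # 4-byte little-endian packet length, then the packet bytes, i.e. a
    # 12-byte RTP header followed by the payload).
    def drop_packets(infile, outfile, loss_rate=0.05, seed=0):
        random.seed(seed)                         # reproducible loss pattern
        with open(infile, 'rb') as fin, open(outfile, 'wb') as fout:
            while True:
                size_field = fin.read(4)
                if len(size_field) < 4:
                    break                         # end of the dump
                (packlen,) = struct.unpack('<I', size_field)
                packet = fin.read(packlen)
                if random.random() >= loss_rate:  # keep this packet
                    fout.write(size_field)
                    fout.write(packet)

    if __name__ == '__main__':
        drop_packets(sys.argv[1], sys.argv[2], loss_rate=float(sys.argv[3]))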

I use Joint Model (JM) 14.2 to encode the video. However, I don't use the Annex B format as my output; instead, I choose RTP packets as the output. The JM output is generated as a sequence of RTP packets, each with an RTP header and a payload. After that, some of the RTP packets are dropped by the simple program mentioned above. Then I can decode the resulting bitstream, again using JM and its error concealment methods.

The main purpose of this process is to evaluate the effect of network losses on the human perception of video quality. In order to measure the perceived quality, the video shown must either be in its decoded form (i.e. full resolution) or be decodable at the receiver side. The RTP packets created by the JM encoder cannot be decoded without the JM software installed. However, with the proper header (or container), most video players are able to decode the bitstream. So, my goal in this question is to encapsulate my encoded RTP packet bitstream in a common container such as AVI or MP4 so that my content is decodable on the receiver computer.

The format of the encoded bitstream in RTP packetized form is as follows:

     ----------------------------------------------------------------------
     | RTP Header #1 | RTP Payload #1 | RTP Header #2 | RTP Payload #2 |...
     ----------------------------------------------------------------------
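
Each packet in the dump starts with the fixed 12-byte RTP header of RFC 3550 (version/flags, marker/payload type, sequence number, timestamp, SSRC), followed by the payload whose first byte carries the NAL unit type. A small helper to inspect one packet could look like the sketch below (same assumed length-prefixed framing as above; `parse_rtp_header` is just an illustrative name):

    import struct

    # Unpack the 12-byte fixed RTP header (RFC 3550) of one packet from the dump.
    def parse_rtp_header(packet):
        b0, b1, seq, timestamp, ssrc = struct.unpack('>BBHII', packet[:12])
        header = {
            'version':      b0 >> 6,
            'padding':      (b0 >> 5) & 1,
            'cc':           b0 & 0x0F,        # number of CSRC entries
            'marker':       b1 >> 7,
            'payload_type': b1 & 0x7F,
            'seq':          seq,
            'timestamp':    timestamp,
            'ssrc':         ssrc,
        }
        payload = packet[12 + 4 * header['cc']:]
        header['nal_unit_type'] = payload[0] & 0x1F  # e.g. 7 = SPS, 8 = PPS, 5 = IDR slice
        return header, payload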

In order to find the video quality, I want to run a subjective test with these bitstreams. I could run the test using the full-resolution data I decode myself, but it is very impractical to crowdsource such a subjective test with GBs of video data on the Internet. So, I want to mux these bitstreams into a container (e.g. AVI) by using FFMPEG. I have tried to decode these bitstreams with FFMPEG and FFPLAY; however, neither of them worked. I also tried the following command, and it didn't work either.

    ffmpeg -f h264 -i <raw_rtpDropped.264> -vcodec copy -r 25 out.avi

Which format or muxer should I use? Do I need to convert these files to any other format?

Grad
  • If I'm not mistaken, you have confused "multiplexing" with "encoding". If your data is YUV, then it is "decoded" or "raw". If you want to compress it, then you need to "encode" it. If you want good quality at a low bit rate, then go for the h264 encoder (-vcodec h264, I believe). – NiRR Aug 18 '13 at 12:56
  • No, I'm not confused about the terms. I have encoded my video using JM and got the output in RTP packet mode, not Annex B mode. After I get my "encoded" bitstreams, I need to decode them in order to show them to the subjects. But I don't want to decode and store GBs of YUV (raw) video. Instead, I want to mux these bitstreams into AVI (or MP4, the container doesn't matter) to store the video data at a much smaller size. That way the videos can be placed on some server and I can direct the "subjects" (viewers/voters) to that web page. That's the easiest way of crowdsourcing. So I need muxing :) – Grad Aug 19 '13 at 23:53

1 Answer


I think I'll attempt to convince you once more: encoding is a method of taking raw video and compressing it. This reduces the size of the video, which is what you want, and it also reduces the quality (you can't get something for nothing). Multiplexing is a term used in many fields, and it means taking two or more data streams and turning them into one. When you mux video, you usually mean that you take encoded video and add audio, or take video alone and put it in a container such as an MPEG-2 transport stream or an MPEG-4 ISO-based container. AVI is also a container/multiplex of video (hence the name Audio Video Interleave), so by itself it does not solve your issue with GBs of data. From Wikipedia: "An AVI file may carry audio/visual data inside the chunks in virtually any compression scheme, including Full Frame (Uncompressed), Intel Real Time (Indeo), Cinepak, Motion JPEG, Editable MPEG, VDOWave, ClearVideo / RealVideo, QPEG, and MPEG-4 Video."

NiRR
  • For my purpose, the video should be either decoded (i.e. full resolution) or decodable at the destination. Muxing (or encapsulating) the **encoded** RTP packets is needed to avoid transferring the raw YUV data. I can already encode the video and I know what a container is. Encoded H.264 bitstreams in the RTP packetized form (not the Annex B form, which has 0x000001 start codes before the NAL units) cannot be decoded at the receiver side without JM. However, with the proper header (or container), most video players are able to decode the bitstream. I changed the question to explain myself better. Please check. – Grad Oct 07 '13 at 01:12
  • If I understand correctly, you have non-Annex B h264 encapsulated in this special RTP thing that only JM can decode, and you're looking to strip the RTP and put the h264 inside something else that can easily be demuxed and then decoded. ISO-based containers (mp4, frv) are used together with non-Annex B regularly, but the problem is that without Annex B the decoder doesn't know the stream parameters (resolution, picture structure, etc.) which are usually carried in the Annex B stream. Is that extra data in the RTP headers? ffmpeg probably doesn't know what to do with them; can you create the data manually? – NiRR Oct 07 '13 at 07:33
  • Yes, that is my problem. Ffmpeg generally gets the parameters from the headers of the given data. Even assuming that information is not in the RTP packet headers, ffmpeg can easily take it from input parameters. Alas, I need that ffmpeg command because I couldn't find it on the web. I didn't understand the part of your question "..., can you create the data manually?" If the question is whether I know the resolution, structure, etc., I can supply that information to ffmpeg. If the question is about creating the **encoded** bitstream data manually, I clearly can't. :) – Grad Oct 12 '13 at 11:16
  • I don't know much about JM, but can you output the same movie with AnnexB format as well as RTP and then reunite the Annex B headers with the stream later on? – NiRR Oct 13 '13 at 07:21
  • That may be a good idea, but it may also be difficult to find which RTP packet loss corresponds to which Annex B slice. My main aim was to find an ffmpeg command to ease my work; a rough sketch of the RTP-stripping idea follows these comments. :( – Grad Oct 14 '13 at 19:50
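
Following up on the idea in the comments, below is a rough, untested sketch of stripping the RTP framing and prepending Annex B start codes, under the same assumptions as in the question (4-byte little-endian length prefix per packet in the JM dump, one NAL unit per RTP payload, SPS/PPS carried in-band as their own packets); `rtp_dump_to_annexb` is only an illustrative name.

    import struct
    import sys

    START_CODE = b'\x00\x00\x00\x01'   # Annex B NAL unit start code

    # Sketch: convert a JM-style RTP dump (assumed length-prefixed packets,
    # 12-byte RTP header, one NAL unit per payload) into an Annex B
    # elementary stream that common tools can demux and decode.
    def rtp_dump_to_annexb(infile, outfile):
        with open(infile, 'rb') as fin, open(outfile, 'wb') as fout:
            while True:
                size_field = fin.read(4)
                if len(size_field) < 4:
                    break
                (packlen,) = struct.unpack('<I', size_field)
                packet = fin.read(packlen)
                cc = packet[0] & 0x0F            # CSRC count extends the header
                payload = packet[12 + 4 * cc:]   # skip the fixed RTP header
                fout.write(START_CODE)
                fout.write(payload)

    if __name__ == '__main__':
        rtp_dump_to_annexb(sys.argv[1], sys.argv[2])

If the result is a valid Annex B stream, a stream copy along the lines of `ffmpeg -f h264 -i annexb.264 -vcodec copy -r 25 out.avi` (or an MP4 output) should then mux it without re-encoding; packets dropped by the loss simulator simply never appear in the stream, so the decoder's error concealment is still exercised.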