
I'm trying to change the test pattern of an ffmpeg streamer (the one from Trouble syncing libavformat/ffmpeg with x264 and RTP) into a familiar RGB format. My broader goal is to compute frames of a streamed video on the fly.

So I replaced its AV_PIX_FMT_MONOWHITE with AV_PIX_FMT_RGB24, which is "packed RGB 8:8:8, 24bpp, RGBRGB..." according to http://libav.org/doxygen/master/pixfmt_8h.html.

To stuff its pixel array called data, I've tried many variations on

// Fill the packed-RGB24 buffer "data" with a horizontal gradient:
// blue at the left edge, shading to red at the right edge.
for (int y=0; y<HEIGHT; ++y) {
  for (int x=0; x<WIDTH; ++x) {
    uint8_t* rgb = data + ((y*WIDTH + x) * 3);  // 3 bytes per pixel: R, G, B
    const double i = x/double(WIDTH);           // 0.0 at left, approaching 1.0 at right
//  const double j = y/double(HEIGHT);
    rgb[0] = 255*i;      // red increases left to right
    rgb[1] = 0;          // no green
    rgb[2] = 255*(1-i);  // blue decreases left to right
  }
}

At HEIGHT x WIDTH = 80x60, this version yields a screenshot of red-to-blue stripes, when I expect a single blue-to-red horizontal gradient.

640x480 yields the same 4-column pattern, but with far more horizontal stripes.

640x640, 160x160, etc, yield three columns, cyan-ish / magenta-ish / yellow-ish, with the same kind of horizontal stripiness.

Vertical gradients behave even more weirdly.

Appearance was unaffected by an AV_PIX_FMT_RGBA attempt (4 not 3 bytes per pixel, alpha=255). Also unaffected by a port from C to C++.

The argument srcStrides passed to sws_scale() is a length-1 array, containing the single int HEIGHT.

The question Access each Pixel of AVFrame asks the same thing in less detail, and is so far unanswered.

The streamer emits one warning, which I doubt affects appearance:

[rtp @ 0x269c0a0] Encoder did not produce proper pts, making some up.

So. How do you set the RGB value of a pixel in a frame to be sent to sws_scale() (and then to x264_encoder_encode() and av_interleaved_write_frame())?

  • I am also trying to compute frames on the fly and stream a video, and I'm having a lot of trouble. Did you ever accomplish this, and do you have your code somewhere? – DankMemes Nov 12 '16 at 05:21
  • Yes. It's in the answer I posted on May 14, 2013. I didn't include a complete code example because almost all of that would be appropriate only for a complete tutorial on computing frames on the fly. The links in this Q and these A's should suffice to let you write code that's useful to your own situation. – Camille Goudeseune Nov 16 '16 at 21:36

2 Answers


Use avpicture_fill() as described in Encoding a screenshot into a video using FFMPEG.

Instead of passing data directly to sws_scale(), do this:

// Allocate a frame and point its data[] / linesize[] at the packed-RGB24 buffer.
AVFrame* pic = avcodec_alloc_frame();
avpicture_fill((AVPicture *)pic, data, AV_PIX_FMT_RGB24, WIDTH, HEIGHT);

and then replace the 2nd and 3rd args of sws_scale() with

pic->data, pic->linesize,

Then the gradients above work properly, at many resolutions.
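
For context, here is a minimal sketch of how the pieces might fit together. The names swsCtx and dstFrame are assumptions standing in for whatever scaler context and destination frame the streamer already sets up; the snippet uses the same avcodec_alloc_frame()/avpicture_fill() calls as this answer, which newer FFmpeg releases deprecate in favor of av_frame_alloc() and av_image_fill_arrays().

// Sketch only: swsCtx, dstFrame (e.g. a YUV420P frame destined for x264),
// data, WIDTH and HEIGHT are assumed to exist already in the streamer.
AVFrame* pic = avcodec_alloc_frame();
avpicture_fill((AVPicture *)pic, data, AV_PIX_FMT_RGB24, WIDTH, HEIGHT);
// avpicture_fill() sets pic->linesize[0] to WIDTH*3 for packed RGB24,
// so sws_scale() now walks the buffer with the correct per-row stride.
sws_scale(swsCtx,
          pic->data, pic->linesize,            // source plane(s) and stride(s)
          0, HEIGHT,                           // full-height slice starting at row 0
          dstFrame->data, dstFrame->linesize);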

  • Anton, thanks for the clarification. Nevertheless I propose to "accept my own answer" (tomorrow when SO lets me), because it generalizes to formats other than `AV_PIX_FMT_RGB24` and hides the computation of linesize. – Camille Goudeseune May 15 '13 at 15:23
  • What does *data* mean? How is it defined? – JavaRunner Apr 02 '17 at 00:09
  • It's the second argument to `avpicture_fill()`. Study the code in the answer that I linked to, and the documentation for that function. – Camille Goudeseune Apr 03 '17 at 17:10

The argument srcStrides passed to sws_scale() is a length-1 array, containing the single int HEIGHT.

Stride (a.k.a. linesize) is the distance in bytes between the starts of two consecutive lines. For reasons mostly to do with optimization (such as memory alignment), it is often larger than the width in bytes, so each line ends with padding.

In your case, without any padding, the stride should be WIDTH * 3 (3 bytes per pixel), not HEIGHT.
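
If you pass the raw buffer to sws_scale() directly instead of wrapping it in an AVFrame, a minimal sketch of the corrected call might look like this (swsCtx and dstFrame are assumed names for the scaler context and destination frame already present in the streamer):

// Sketch: one packed-RGB24 plane; its stride is the byte length of one row.
const uint8_t* const srcSlice[1] = { data };
const int srcStride[1]           = { WIDTH * 3 };  // not HEIGHT
sws_scale(swsCtx,
          srcSlice, srcStride,   // source plane and its stride in bytes
          0, HEIGHT,             // convert all HEIGHT rows, starting at row 0
          dstFrame->data, dstFrame->linesize);

With a stride of HEIGHT, sws_scale() starts each new row only HEIGHT bytes after the previous one, which would explain the repeating, striped patterns described in the question.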