I developed an application that runs on Windows. It decodes incoming video streams and displays them on screen. Let's say I have one 4K monitor and the application displays four 4K streams on it, so each stream is shown at 1080p. Decoding four 4K streams and scaling them to 1080p for display consumes a lot of resources, especially the decoding part, even when decoding on the GPU. Is it possible to decode only some pixels of a 4K stream (e.g. every fourth pixel) and then interpolate the result to 1080p? Or is there another way to consume fewer resources while decoding many 4K streams? (I am using FFmpeg for decoding.)
1 Answer
Is it possible to decode some pixels (like every 1 of 4 pixel) of 4K stream then interpolating it to 1080p video?
No, that's not possible. Video is compressed by transforming it from the spatial domain to the frequency domain; the resulting bitstream does not even contain pixels. Decoding is the process of reversing that operation. It is impossible (or at least impractical) to resample a signal while it is in the frequency domain; it must be converted back to the spatial domain first.
Or is there any other way that will consume less resource while decoding many 4K streams?
Use a hardware decoder, like NVDEC, or Quick Sync if it is available on the computer's video card.
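To illustrate, here is a hedged sketch of what hardware decoding plus GPU-side downscaling looks like with the `ffmpeg` command-line tool (the exact filters and encoders available depend on how your FFmpeg was built; `input.mp4` and `output.mp4` are placeholders):

```shell
# NVIDIA: decode with NVDEC, keep frames in GPU memory, and downscale
# to 1080p with the CUDA scaler so no full-resolution frames cross the bus.
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 \
       -vf "scale_cuda=1920:1080" -c:v h264_nvenc output.mp4

# Intel Quick Sync equivalent, using the QSV decoder and scaler.
ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mp4 \
       -vf "scale_qsv=1920:1080" -c:v h264_qsv output.mp4
```

The key point for your use case is keeping decoded frames on the GPU (`-hwaccel_output_format cuda` / QSV surfaces) and scaling there, rather than copying full 4K frames back to system memory before downscaling.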

szatmary
- Thank you for the answer @szatmary. I am using hw decoding, but even with hw decoding my application can reach at most 175 fps of 4K video decode. Is there a decoding trick for H.264/HEVC to get higher decode performance? Maybe something like scaled decoding? – eruslu May 28 '20 at 06:19
- No, no trick, no magic. There is no such thing as "scaled decoding". – szatmary May 28 '20 at 06:22