
I just set up a 'new' PC with built-in Intel video and an Nvidia card, mostly to speed up video processing with ffmpeg and other programs. At first the built-in Intel adapter was disabled, and only the Nvidia card was running. ffmpeg worked as expected: the GPU could be used for decoding and encoding.

However, VirtualDub, a program I use frequently, has a problem with Nvidia cards (at least on Windows 10). The display gets screwed up, previews don't work, and all sorts of other problems occur. I searched the various discussion boards, and nobody has a good solution. (The problem is apparently split between VirtualDub and Nvidia, since other programs such as VideoLAN, Avidemux, HandBrake, OBS Studio, etc. all appear to work fine.)

So I re-enabled the on-board Intel adapter and made it my primary (and only) video adapter with a monitor attached. The Nvidia card is still installed, but with no monitor connected; I really only need it for hardware acceleration.

HandBrake and OBS Studio found the card and used it with no problem.

However, my batch file that specified cuda for both decoding and encoding failed to run. The ffmpeg command that includes -hwaccel cuda resulted in:

[h264 @ 000002783beaa700] Hardware is lacking required capabilities 
[h264 @ 000002783beaa700] Failed setup for format cuda: hwaccel initialisation returned error.
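
(As an aside, the list of -hwaccel names a given build actually supports can be printed with a stock ffmpeg query option; this is just the standard command:)

ffmpeg -hide_banner -hwaccels

On a build configured with --enable-nvdec and --enable-cuvid, cuda should appear in that list; nvenc would not, since that is an encoder rather than a hwaccel.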

I also tried -hwaccel nvenc, which was rejected; apparently it's not a synonym in this version of ffmpeg:

ffmpeg version 4.3.1-2021-01-01-full_build-www.gyan.dev Copyright (c) 2000-2021 the FFmpeg developers
built with gcc 10.2.0 (Rev5, Built by MSYS2 project)
configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-lzma --enable-libsnappy --enable-zlib --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libdav1d --enable-libzvbi --enable-librav1e --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
libavutil      56. 51.100 / 56. 51.100
libavcodec     58. 91.100 / 58. 91.100
libavformat    58. 45.100 / 58. 45.100
libavdevice    58. 10.100 / 58. 10.100
libavfilter     7. 85.100 /  7. 85.100
libswscale      5.  7.100 /  5.  7.100
libswresample   3.  7.100 /  3.  7.100
libpostproc    55.  7.100 / 55.  7.100

When I use QSV acceleration on my other PC I have to do this:

-init_hw_device qsv=qsv -hwaccel qsv

so I tried

-init_hw_device cuda=cuda -hwaccel cuda

but that didn't work either.

I've seen comments about the ability to select the GPU if there is more than one board installed, using the -gpu option. However, when I try to use -gpu 0 or -gpu 1 I get:

Codec AVOption gpu (Selects which NVENC capable GPU to use. First GPU is 0, second is 1, and so on.) specified for input file #0 (xxx.avi) is not a decoding option.
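
(Reading that error literally, -gpu is an option of the h264_nvenc encoder, so it presumably belongs after -i among the output options; the corresponding selector for decoding appears to be -hwaccel_device, placed before -i. A sketch, where the device index 0 for the Nvidia card is an assumption:)

ffmpeg -hide_banner -hwaccel cuda -hwaccel_device 0 -i "input.avi" -c:v h264_nvenc -gpu 0 "output.mp4"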

I looked at:

https://github.com/FFmpeg/FFmpeg/commit/527a1e213167123d24d014bc0b956ef43d9d6542

to get more information on -init_hw_device, but I'm sorry to say that what's on that page makes no sense to me at all. There are no examples, and no explanation of how to actually select a device.
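
(For the record, the general shape seems to be -init_hw_device type=name:device, where name is an arbitrary label and device is the adapter index, and -filter_hw_device name then points hardware filters at that device. A sketch only; the label 'gpu' and index 0 are assumptions, and I can't confirm it works on this system:)

ffmpeg -init_hw_device cuda=gpu:0 -filter_hw_device gpu -i "input.avi" -vf "hwupload,scale_cuda=640:480,hwdownload,format=nv12" -c:v h264_nvenc "output.mp4"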

I looked at:

https://docs.nvidia.com/video-technologies/video-codec-sdk/ffmpeg-with-nvidia-gpu/

which has an 'example' of -init_hw_device, and I did a cut and paste of what they had there to my batch file, but it was rejected.

I also looked at:

How to to burn subtitles based image on video using 'overlay_cuda', ffmpeg video filter

which has two examples of how to initialize a cuda device, and they don't work for me either. -init_hw_device cuda=cuda is accepted without error, but then -hwaccel cuda still fails. Trying to use the hw accelerated filter scale_cuda also fails.

So how do I get the Nvidia card to decode video when it's not the only graphics adapter? I was able to decode video when only the Nvidia card was active, there "must" be a way to get to it now. I just need to know how to tell ffmpeg to use the card that is there. Since it has no problem finding the card for encoding, shouldn't it also still be able to find the card for decoding and filters? Or am I really the first person ever to have both Intel and Nvidia graphics adapters working on my system and trying to use ffmpeg with hardware acceleration?

=====================

Latest update.

I had tried the examples on the Nvidia FFmpeg transcoding guide web page, and as mentioned previously I still got errors. I did a cut and paste from that web page to my command window, and ffmpeg still did not find the correct graphics adapter.

However, I do have a work-around. I don't particularly like it, but it works.

First: Windows 10 does not understand the concept of a graphics adapter with no monitor attached. Even though graphics processors (specifically Nvidia's) are available without actual video outputs, and are used in supercomputers and elsewhere for high-speed stream processing, Windows will not let you access the card's settings if no monitor is attached. The Nvidia control center likewise refuses to open the card's settings, and you can't set processor affinity.

So I connected a second monitor, and set up the Nvidia card as the primary.

Now ffmpeg -hwaccel cuda works the first time. The command I was using before:

ffmpeg -hide_banner -hwaccel cuda -i "input.avi" -c:a copy -ac 1 -c:v h264_nvenc -preset hq -movflags faststart -qp 30 "output.mp4"

Was failing because it couldn't find the Nvidia adapter. This command now works correctly the first time and uses hardware acceleration for both decode and encode. (The audio portion is irrelevant, if I also re-encode the audio the results are the same.)

With scaling, the command was like this:

ffmpeg -hide_banner -hwaccel cuda -i "input.avi" -c:a copy -ac 1 -c:v h264_nvenc -preset hq -vf "scale=640:480" -movflags faststart -qp 30 "output.mp4"

This works. However:

ffmpeg -hide_banner -hwaccel cuda -i "input.avi" -c:a copy -ac 1 -c:v h264_nvenc -preset hq -vf "scale_cuda=640:480" -movflags faststart -qp 30 "output.mp4"

Fails with

Impossible to convert between the formats supported by the filter 'graph 0 input from stream 0:0' and the filter 'auto_scaler_0'
Error reinitializing filters!
Failed to inject frame into filter network: Function not implemented
Error while processing the decoded data for stream #0:0

I was able to get around this by rearranging things in what seems to me to be an unnecessarily convoluted syntax.

ffmpeg -hide_banner -hwaccel cuvid -hwaccel_output_format cuda -i "input.avi" -c:a aac -b:a 192k -ar 48000 -vf "scale_cuda=856:480" -c:v h264_nvenc -preset hq -movflags faststart -qp 26 "output.mp4"

Having to specify the output format twice seems weird, but Task Manager shows near 100% Video Decode activity, and the time it takes to do this indicates to me that the scale_cuda filter is being used.
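
(For what it's worth, -hwaccel cuda combined with -hwaccel_output_format cuda seems to be the newer spelling of the same thing: the first enables NVDEC decoding, and the second keeps the decoded frames in GPU memory so scale_cuda can consume them directly instead of failing on a system-memory frame. Assuming that's right, this should be equivalent:)

ffmpeg -hide_banner -hwaccel cuda -hwaccel_output_format cuda -i "input.avi" -c:a aac -b:a 192k -ar 48000 -vf "scale_cuda=856:480" -c:v h264_nvenc -preset hq -movflags faststart -qp 26 "output.mp4"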

I don't particularly like having to use a second monitor (If VirtualDub worked properly I probably wouldn't have to), but I'm willing to live with it. It appears that if you have two different video cards and you want to use hardware acceleration on one of them it has to be the primary.

I haven't tested if Intel QSV is still accessible, nor have I tried switching the order of the graphics adapters back to completely verify the source of the problem, and I'm not really planning to do so (unless some of you think that would be useful). I get the definite impression that few people, if any, have tried to get both an Nvidia and an Intel adapter to provide hardware video acceleration on the same system. I will try to access QSV to see if using both accelerators is an improvement.

I can live with the weird command line to get the cuda filters to work, but if anyone knows a better way to do it I think it would be helpful to post it here for future reference if anyone else runs into a similar problem. None of the examples of using cuda accelerated filters that I've found on any of the many web sites I've read worked exactly as given.

==================

The good news:

It's possible to use both Nvidia and QSV hardware in at least some cases.

This command works:

ffmpeg -hide_banner -hwaccel dxva2 -i "input.avi" -c:a copy ^
  -c:v h264_qsv -vf "crop=1920:1044:0:0" -preset veryfast -profile:v high -level 4.1 -qp 22 "output.mp4"

Task Manager says Nvidia is decoding the input, and GPU-Z says Intel is also active, so it must be doing the encoding.

The bad news: I can't figure out a way to use both a CUDA filter and a standard filter in the same process.

This does not work:

ffmpeg -hide_banner -hwaccel cuvid -hwaccel_output_format cuda -i "input.avi" -c:a aac -b:a 192k -ar 48000 -vf "scale_cuda=856:480,crop=1280:696:0:24" -c:v h264_nvenc -preset hq -movflags faststart -qp 30 "output.mp4"

Reversing the order of scale_cuda and crop (with appropriate adjustments to the numbers) also does not work. There are errors about not being able to transfer the processing stream.
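
(One pattern I have seen described, though I haven't verified it here, is to run the CUDA filter first and then pull the frames back into system memory with hwdownload before the software filter; the format filter after hwdownload is apparently needed to pick a software pixel format. With the crop adjusted to fit inside the scaled frame, the sketch would be:)

ffmpeg -hide_banner -hwaccel cuda -hwaccel_output_format cuda -i "input.avi" -c:a aac -b:a 192k -ar 48000 -vf "scale_cuda=856:480,hwdownload,format=nv12,crop=856:432:0:24" -c:v h264_nvenc -preset hq -movflags faststart -qp 30 "output.mp4"

The downside would be an extra GPU-to-CPU copy per frame, which eats into whatever time the CUDA scaler saves.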

I will try the changes in the latest comment; I think I may have tried them before without success, but I will check again.

In my web searches I have not found an example of 'mixed' filters.

I did see "-crop" and "-resize" on the Nvidia ffmpeg transcoding web page, similar to this:

–crop 0x36x0x0 –resize 1280x696

Once again, I did a cut and paste from the Nvidia web page to my command window and it didn't work. If there is a way to invoke the Nvidia command for these options that has been tested and found to actually work I would really like to see it.

  • Can you please add the complete command line? My guess is: What you are trying to do is "video transcoding". Video transcoding requires both decoding and encoding. I think you need to set both decoding acceleration device and encoding acceleration device. Check [NVIDIA FFmpeg Transcoding Guide](https://developer.nvidia.com/blog/nvidia-ffmpeg-transcoding-guide/). – Rotem Feb 08 '21 at 20:45
  • I tried the instructions in the FFmpeg Trancoding Guide and as mentioned before, they didn't work. I did a cut and paste from that page to my command window. – Bart Lederman Feb 09 '21 at 11:03
  • Your question is very detailed, but I don't think I am able to post an answer. There are few things to consider: read [3 Detailed description](https://ffmpeg.org/ffmpeg.html#toc-Detailed-description). There are **three** main stages: **decoding**, **filtering** and **encoding**. Each stage may (or may not) use hardware acceleration. The syntax for accelerating the decoding the filtering and the encoding may be different. Remember that before `-i` applies decoding, and after `-i` applies encoding. – Rotem Feb 09 '21 at 12:12

2 Answers


This is not really an answer, but I've gotten a little further.

This command line will get the Nvidia card to both decode and encode.

ffmpeg -hide_banner -hwaccel dxva2 -i "input.avi" -c:a aac -b:a 192k -ar 48000 -c:v h264_nvenc -preset hq -profile:v high -level 4.1 -qp 36 "output.mp4"

At least on my Windows 10 system, the GPU display in Task Manager shows hardware acceleration being used on both Decode and Encode. Why the decode has to be dxva2 and default to Nvidia is something nobody is willing to explain, but it works.

Filters are another problem. I have two Intel CPU systems with QSV, one of which also has an Nvidia card. Nobody anywhere is willing or able to explain how to use both an accelerated filter and a non-accelerated filter at the same time. Apparently, it either can't be done, or else the people who know how to do it are anti-social and won't pass on the information.

Even just using some Nvidia filters by themselves doesn't work. On the web pages I referenced in my first post, Nvidia says there is a "-crop" filter. I have finally been able to get it to work, but only if it is by itself. It is not possible to have it at the same time as a non-Cuda filter. In addition, I can't find a page that even explains the argument to -crop: if anyone knows how to set the height and width of the crop for this command, that would be a big improvement.

Thanks.


I did finally get one example of crop to work, but with one error.

ffmpeg -hwaccel cuvid -c:v h264_cuvid -crop 10x100x10x100 -i input.mp4 -c:v h264_nvenc output.mp4

results in a cropped output, but with the error:

WARNING: defaulting hwaccel_output_format to cuda for compatibility with old commandlines. This behaviour is DEPRECATED and will be removed in the future. Please explicitly set "-hwaccel_output_format cuda".

The problem now is that there is absolutely no way to add that option without getting errors or causing the command to reject the crop. I've tried putting it everywhere I can find in the command line, and the only way to get the operation to work is to leave the option out and hope that Nvidia doesn't deprecate the default.

This same command does not work for another file, which is AVI, even though it contains the same AVC video encoding. I can't find this documented anywhere: when I do video transcoding without the crop option, ffmpeg reads the AVI file just fine, and, as mentioned, Task Manager says the Nvidia card is doing hardware decoding of the input file. I can also transcode other AVI files that were created by the same software (VirtualDub). Both files are in the same batch file, and I did a cut and paste of the command line that worked, changing only the input file name. The error message just says:

Option hwaccel (use HW accelerated decoding) cannot be applied to output url –crop -- you are trying to apply an input option to an output file or vice versa. Move this option before the file it belongs to. Error parsing options for output file –crop. Error opening output files: Invalid argument

I can't figure out why changing the input file name would cause the command to fail. (One possible clue: the error message shows the option as '–crop' with an en-dash rather than an ASCII hyphen. ffmpeg only recognizes options that begin with a plain '-', so any line that kept the web page's typographic dashes would have –crop treated as an output file name, which is exactly what the message says; perhaps the failing line kept the pasted dashes while the working one was retyped.)

I did find that the arguments for -crop are (top)x(bottom)x(left)x(right).
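
(Taking that ordering at face value, the 0x36x0x0 in the Nvidia example trims 36 pixels off the bottom, i.e. 1920x1080 becomes 1920x1044, matching the crop=1920:1044 filter used earlier. Typed with plain ASCII hyphens throughout, the decoder-side version would be a sketch like:)

ffmpeg -hwaccel cuvid -c:v h264_cuvid -crop 0x36x0x0 -i "input.mp4" -c:v h264_nvenc "output.mp4"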
