
I'm using ffmpeg's libav to decode video files on Mac. For supported codecs, it says it can use the Mac VideoToolbox framework to hardware-accelerate the decoding. Can I get the result of that decode directly as a Metal or CoreVideo buffer or texture, in GPU memory? My plan is to process it with compute shaders before sending it to the screen and I'd like to maximize framerate by removing CPU<->GPU transfers.

Is there an example of doing this anywhere?
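For context, here is a minimal, untested sketch of the libav side as I understand it: create a VideoToolbox hardware device context with `av_hwdevice_ctx_create()`, pick `AV_PIX_FMT_VIDEOTOOLBOX` in the `get_format` callback, and then (if I'm reading `libavutil/pixfmt.h` correctly) each decoded frame's `data[3]` should hold a `CVPixelBufferRef`, which could presumably be wrapped as a Metal texture with `CVMetalTextureCacheCreateTextureFromImage()` without a readback. These are real libav/CoreVideo calls, but I haven't verified that the whole path actually stays in GPU memory end to end, which is the part I'm asking about.

```c
// Untested sketch: decode via libav with VideoToolbox and grab the
// CVPixelBufferRef that backs each hardware frame. Error handling trimmed.
#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>
#include <CoreVideo/CVPixelBuffer.h>

// get_format callback: prefer the VideoToolbox hardware pixel format.
static enum AVPixelFormat pick_videotoolbox(AVCodecContext *ctx,
                                            const enum AVPixelFormat *fmts)
{
    for (const enum AVPixelFormat *p = fmts; *p != AV_PIX_FMT_NONE; p++) {
        if (*p == AV_PIX_FMT_VIDEOTOOLBOX)
            return *p;
    }
    return fmts[0];  // fall back to whatever software format is offered
}

// Attach a VideoToolbox hw device context to an already-configured decoder.
int enable_videotoolbox(AVCodecContext *dec_ctx)
{
    AVBufferRef *hw_dev = NULL;
    int ret = av_hwdevice_ctx_create(&hw_dev, AV_HWDEVICE_TYPE_VIDEOTOOLBOX,
                                     NULL, NULL, 0);
    if (ret < 0)
        return ret;
    dec_ctx->hw_device_ctx = av_buffer_ref(hw_dev);
    dec_ctx->get_format    = pick_videotoolbox;
    av_buffer_unref(&hw_dev);
    return 0;
}

// Called for each frame returned by avcodec_receive_frame().
void handle_frame(const AVFrame *frame)
{
    if (frame->format == AV_PIX_FMT_VIDEOTOOLBOX) {
        // Per libavutil/pixfmt.h, data[3] holds the CVPixelBufferRef for
        // VideoToolbox frames -- no av_hwframe_transfer_data() (i.e. no
        // GPU->CPU readback) needed if we can stay on the GPU.
        CVPixelBufferRef pixbuf = (CVPixelBufferRef)frame->data[3];

        // Next step (Objective-C side, not shown): wrap the buffer's planes
        // as MTLTextures with CVMetalTextureCacheCreateTextureFromImage(),
        // then feed them to the compute pipeline.
        (void)pixbuf;
    }
}
```

If this is roughly the right shape, a pointer to a complete working example (or a correction) would be great.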

GaryO
  • Have you seen [this](https://developer.apple.com/videos/play/wwdc2020/10090/)? VT explanations begin at roughly 14:00 into the video, and later on it ties in with the use of Metal. – FiddlingAway Jan 14 '23 at 11:44
  • That's a good walkthrough of the Mac APIs, so it would be useful to someone working purely with the native macOS frameworks, but unfortunately it has no references to ffmpeg at all. – GaryO Jan 14 '23 at 11:47
  • Have you ever checked vlc or mpv's source code? If I remember correctly, mpv's decoding backend is built entirely on ffmpeg, and mpv implements zero-copy (no copy-back) hardware decoding. So there should be some clues to solve your problem, I guess. – Mark Miller Jan 22 '23 at 17:57

0 Answers