
I understand gstreamer is for building complex media pipelines: it takes care of format negotiation and provides an abstraction over the underlying implementation, so I can use elements without needing to know how they are implemented. As such, it can provide accelerated elements whose implementations can run on multiple offload devices.
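For example, a minimal sketch using the stock C API (videotestsrc, videoconvert and autovideosink are just stand-ins here for whatever elements, accelerated or not, a real pipeline would use):

```c
#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    /* The application only names the elements; caps negotiation between
       them is handled by gstreamer. A vendor-accelerated decoder or filter
       could be dropped in here without changing the surrounding code. */
    GError *error = NULL;
    GstElement *pipeline = gst_parse_launch(
        "videotestsrc num-buffers=100 ! videoconvert ! autovideosink", &error);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", error->message);
        g_clear_error(&error);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Block until EOS or an error is posted on the bus. */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS);

    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}
```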

OpenVX similarly allows you to construct a compute graph whose nodes can be implemented on different accelerators. OpenVX focuses solely on computer vision, while gstreamer is much broader.
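As a rough sketch against the standard OpenVX C API (the kernels are standard ones; the target strings passed to vxSetNodeTarget, "dsp", "gpu", "cpu", are vendor-defined and only placeholders here), a graph with a few vision nodes pinned to different targets might look like:

```c
#include <VX/vx.h>

int main(void) {
    vx_context context = vxCreateContext();
    vx_graph graph = vxCreateGraph(context);

    /* Virtual images are intermediate buffers whose storage the runtime manages. */
    vx_image input   = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_image blurred = vxCreateVirtualImage(graph, 640, 480, VX_DF_IMAGE_U8);
    vx_image grad_x  = vxCreateVirtualImage(graph, 640, 480, VX_DF_IMAGE_S16);
    vx_image grad_y  = vxCreateVirtualImage(graph, 640, 480, VX_DF_IMAGE_S16);
    vx_image output  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_S16);

    /* Build the graph: Gaussian blur -> Sobel -> gradient magnitude. */
    vx_node blur  = vxGaussian3x3Node(graph, input, blurred);
    vx_node sobel = vxSobel3x3Node(graph, blurred, grad_x, grad_y);
    vx_node mag   = vxMagnitudeNode(graph, grad_x, grad_y, output);

    /* Pin individual nodes to different targets (OpenVX 1.2 targets API;
       the target names are vendor-specific placeholders). */
    vxSetNodeTarget(blur,  VX_TARGET_STRING, "dsp");
    vxSetNodeTarget(sobel, VX_TARGET_STRING, "gpu");
    vxSetNodeTarget(mag,   VX_TARGET_STRING, "cpu");

    /* The runtime validates the whole graph up front, then schedules it. */
    if (vxVerifyGraph(graph) == VX_SUCCESS)
        vxProcessGraph(graph);

    vxReleaseGraph(&graph);
    vxReleaseContext(&context);
    return 0;
}
```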

So if they are achieving similar goals, why have two different frameworks? Why not just use gstreamer?

  • My guess: computer vision and media streaming are two quite different things? While gstreamer might seem broader from its description, nowhere does it say it handles computer vision... so wouldn't that be a major reason? If you need computer vision technology, look at OpenVX; if you need to set up media streaming, look at gstreamer. – Lasse V. Karlsen Nov 15 '21 at 17:13
  • Let's say I have a sample computer vision algorithm composed of 3 CV kernels that run on different accelerators. I could construct an OpenVX graph with the 3 OpenVX nodes, or I could put together a gstreamer pipeline with the CV kernels exposed as gstreamer elements. In both scenarios, I can specify which accelerator device each kernel runs on, and because of pipelining, I can have multiple stages processing different data at the same time. What advantage, from the point of view of constructing a CV algorithm, do I derive from using OpenVX instead of gstreamer? – sarat poluri Nov 15 '21 at 18:48

0 Answers