I understand that GStreamer is for building complex media pipelines: it handles format negotiation and provides an abstraction over the underlying implementation, so I can use elements without needing to know how they are implemented. As such, it can offer accelerated elements whose implementations run on multiple offload devices.
OpenVX similarly lets you construct a compute graph whose nodes can be implemented on different accelerators. OpenVX focuses solely on computer vision, while GStreamer is much broader in scope.
So if they achieve similar goals, why have two different frameworks? Why not just use GStreamer?