With Qt 6.2 there is a whole system of "graphics API independent" infrastructure (both on the tooling and the C++ side) that lets you render things in e.g. QML without writing a single line of Vulkan, Metal, Direct3D or OpenGL code.
Based on the "custom material" example and QQuickWindow::createTextureFromImage, I was able to render an image to the screen through a custom vertex and fragment shader, on both OpenGL and Metal, without altering my code. Great success!
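For context, my setup condensed to its essentials looks roughly like this (ImageMaterial/ImageShader are my own names; the pattern follows the scenegraph/custommaterial example, with the .qsb shaders precompiled by the qsb tooling):

```cpp
#include <QQuickWindow>
#include <QSGMaterial>
#include <QSGMaterialShader>
#include <QSGTexture>

// Custom material carrying the texture created by the window.
class ImageMaterial : public QSGMaterial
{
public:
    QSGMaterialType *type() const override { static QSGMaterialType t; return &t; }
    QSGMaterialShader *createShader(QSGRendererInterface::RenderMode) const override;
    QSGTexture *texture = nullptr;
};

// Shader built from precompiled .qsb files: this is the graphics-API-independent part.
class ImageShader : public QSGMaterialShader
{
public:
    ImageShader()
    {
        setShaderFileName(VertexStage, QLatin1String(":/shaders/image.vert.qsb"));
        setShaderFileName(FragmentStage, QLatin1String(":/shaders/image.frag.qsb"));
    }
    void updateSampledImage(RenderState &, int binding, QSGTexture **texture,
                            QSGMaterial *newMaterial, QSGMaterial *) override
    {
        // Expose the material's texture at sampler binding point 1.
        if (binding == 1)
            *texture = static_cast<ImageMaterial *>(newMaterial)->texture;
    }
};

QSGMaterialShader *ImageMaterial::createShader(QSGRendererInterface::RenderMode) const
{
    return new ImageShader;
}
```

The texture itself is created in the item's updatePaintNode(), essentially `material->texture = window()->createTextureFromImage(m_image);`.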
But... the texture creation function converts the image to RGBA8/GL_RGBA, so using e.g. 16-bit or floating-point images is impossible. The QQuickWindow::createTextureFromImage documentation says:
> Reimplement QSGTexture to create textures with different parameters.
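Concretely, as far as I can tell there is no way to keep the extra precision (a minimal sketch; the format is just an example):

```cpp
// A 16-bit-per-channel source image...
QImage img(512, 512, QImage::Format_RGBA64);
// ...still ends up as an RGBA8 native texture: the extra precision is lost
// in the conversion, and there is no option to request another format.
QSGTexture *tex = window()->createTextureFromImage(img);
```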
And the QSGTexture documentation states:
> Materials that work with textures reimplement updateSampledImage() to provide logic that decides which QSGTexture's underlying native texture should be exposed at a given shader resource binding point.
The only way I can see to do this with the provided API is to make native graphics API calls, and thus implement my special-format texture in both a QSGTexture and a QSGMaterialShader, for every graphics API I want to support. This seems to run counter to the whole point of what QRhi is supposed to provide (see e.g. here for an overview). The same goes for e.g. an additional texture, a 3D texture, or anything else not covered by the simple vertex+fragment shader with a single RGBA texture (granted, they provide some support for mipmapping and anisotropic filtering).
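A sketch of what that escape hatch appears to look like: create the native texture yourself for every backend and wrap it via the per-API fromNative() factories. Only the OpenGL branch is filled in here; GL_RGBA16 assumes a desktop GL context, and createRGBA16Texture is my own hypothetical helper:

```cpp
#include <QOpenGLContext>
#include <QOpenGLFunctions>
#include <QQuickWindow>
#include <QSGRendererInterface>
#include <QSGTexture>

QSGTexture *createRGBA16Texture(QQuickWindow *window, const void *pixels, const QSize &size)
{
    switch (window->rendererInterface()->graphicsApi()) {
    case QSGRendererInterface::OpenGL: {
        // Create and upload the 16-bit texture with raw GL calls...
        QOpenGLFunctions gl(QOpenGLContext::currentContext());
        GLuint id = 0;
        gl.glGenTextures(1, &id);
        gl.glBindTexture(GL_TEXTURE_2D, id);
        gl.glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16, size.width(), size.height(),
                        0, GL_RGBA, GL_UNSIGNED_SHORT, pixels);
        // ...then hand ownership of the native object to the scene graph.
        return QNativeInterface::QSGOpenGLTexture::fromNative(id, window, size);
    }
    case QSGRendererInterface::Metal:      // would need an MTLTexture built by hand
    case QSGRendererInterface::Vulkan:     // would need a VkImage plus layout handling
    case QSGRendererInterface::Direct3D11: // would need an ID3D11Texture2D
    default:
        return nullptr; // ...and so on, once per graphics API
    }
}
```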
Am I missing something? Will this improve in the future, or are we stuck implementing graphics-API-specific code for anything outside the "rendering a simple UI through QML" case? I suppose the same limitation exists for the Qt 3D module: anything outside the supported scenarios is not implementable because the underlying QRhi API is not public?