
I need to grab the screen pixels into a texture to perform post-processing. Previously, I was using a MTLBlitCommandEncoder to copy texture to texture: the source being the MTLDrawable's texture and the destination my own texture. Both were MTLPixelFormatBGRA8Unorm, so everything worked fine.

However, now I need a framebuffer color attachment texture of MTLPixelFormatRGBA16Float for HDR rendering. So when grabbing the screen pixels, I am actually copying from this color attachment texture instead of the drawable's texture, and I am getting this error:

[MTLDebugBlitCommandEncoder internalValidateCopyFromTexture:sourceSlice:sourceLevel:sourceOrigin:sourceSize:toTexture:destinationSlice:destinationLevel:destinationOrigin:options:]:447: failed assertion [sourceTexture pixelFormat](MTLPixelFormatRGBA16Float) must equal [destinationTexture pixelFormat](MTLPixelFormatBGRA8Unorm)

I don't think I need to change my destination texture to the RGBA16Float format, since that would take up double the memory. One full-screen texture (the color attachment) in that format should be enough for HDR to work, right?

Is there another way to perform this kind of copy successfully? In OpenGL there is no error when copying with glCopyTexImage2D.

Darren
  • Well, if you don't want to use 16-bit float, you will need to add a render pass to convert to whatever pixel format you want to use. I am not sure I understand what you mean by "One full screen texture... should be enough for HDR". HDR is expressed in different pixel formats and bit depths, but I think you typically want at least 10 bits per color channel, so you might want to consider something like MTLPixelFormatRGB10A2Unorm. – ldoogy Mar 23 '20 at 06:39
  • @ldoogy OK, I will try adding a render pass. I am using MTLPixelFormatRGBA16Float to render color values beyond 1.0; Unorm is clamped between 0 and 1. – Darren Mar 23 '20 at 12:49
  • I suggest you use sRGB for all your render textures; this avoids the need to double the memory for 16-bit half-float pixels, and you will not lose precision by rendering to linear RGB. Plus, blit will work. – MoDJ Mar 24 '20 at 05:30
  • @MoDJ Will that work for rendering color values > 1.0? Currently I have a problem with GL_RGBA16F being an undeclared identifier when using GLES 2 (for Android, since this is a cross-platform app). – Darren Mar 24 '20 at 09:22

1 Answer


Metal automatically converts from the source to the destination pixel format during rendering (but not during blits, where the formats must match exactly, as the assertion says). So you could just do a no-op render pass that samples the RGBA16Float texture and writes it into the BGRA8Unorm target to perform the conversion.
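A no-op pass like that can be sketched roughly as below. This is an illustrative sketch, not code from the question: the function names (`makeConvertPipeline`, `encodeConversion`) and the inline shader are my own, and values above 1.0 will be clamped when written into the Unorm target.

```swift
import Metal

// Passthrough shaders: a fullscreen triangle that samples the source
// texture and returns it unchanged. The format conversion happens
// implicitly when the fragment output is written to the attachment.
let shaderSource = """
#include <metal_stdlib>
using namespace metal;

struct VOut { float4 pos [[position]]; float2 uv; };

// Fullscreen triangle, no vertex buffer needed.
vertex VOut fullscreenVertex(uint vid [[vertex_id]]) {
    float2 uv = float2((vid << 1) & 2, vid & 2);
    VOut out;
    out.pos = float4(uv * 2.0 - 1.0, 0.0, 1.0);
    out.uv = float2(uv.x, 1.0 - uv.y);
    return out;
}

fragment float4 passthroughFragment(VOut in [[stage_in]],
                                    texture2d<float> src [[texture(0)]]) {
    constexpr sampler s(filter::nearest);
    return src.sample(s, in.uv);
}
"""

func makeConvertPipeline(device: MTLDevice) throws -> MTLRenderPipelineState {
    let library = try device.makeLibrary(source: shaderSource, options: nil)
    let desc = MTLRenderPipelineDescriptor()
    desc.vertexFunction = library.makeFunction(name: "fullscreenVertex")
    desc.fragmentFunction = library.makeFunction(name: "passthroughFragment")
    desc.colorAttachments[0].pixelFormat = .bgra8Unorm // destination format
    return try device.makeRenderPipelineState(descriptor: desc)
}

func encodeConversion(commandBuffer: MTLCommandBuffer,
                      pipeline: MTLRenderPipelineState,
                      source: MTLTexture,       // MTLPixelFormatRGBA16Float
                      destination: MTLTexture)  // MTLPixelFormatBGRA8Unorm
{
    let pass = MTLRenderPassDescriptor()
    pass.colorAttachments[0].texture = destination
    pass.colorAttachments[0].loadAction = .dontCare
    pass.colorAttachments[0].storeAction = .store
    guard let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: pass) else { return }
    encoder.setRenderPipelineState(pipeline)
    encoder.setFragmentTexture(source, index: 0)
    encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 3)
    encoder.endEncoding()
}
```

If you are doing HDR tone mapping anyway, this is also the natural place to put it: replace the passthrough fragment with your tone-mapping function.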

Alternatively, if you want to avoid the boilerplate of a no-op render pass, you can use the MPSImageConversion performance shader, which does essentially the same thing.
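For example, something along these lines (a minimal sketch; the `convert` wrapper function is my own, and the alpha handling is an assumption you may need to adjust):

```swift
import Metal
import MetalPerformanceShaders

// Converts an RGBA16Float texture into a BGRA8Unorm texture using
// MPSImageConversion. Values outside 0...1 are clamped on write,
// just as with the render-pass approach.
func convert(commandBuffer: MTLCommandBuffer,
             device: MTLDevice,
             source: MTLTexture,       // MTLPixelFormatRGBA16Float
             destination: MTLTexture)  // MTLPixelFormatBGRA8Unorm
{
    let conversion = MPSImageConversion(device: device,
                                        srcAlpha: .alphaIsOne,
                                        destAlpha: .alphaIsOne,
                                        backgroundColor: nil,
                                        conversionInfo: nil)
    conversion.encode(commandBuffer: commandBuffer,
                      sourceTexture: source,
                      destinationTexture: destination)
}
```

Passing `nil` for `conversionInfo` skips color-space conversion and only converts the pixel format; supply a `CGColorConversionInfo` if your source and destination are in different color spaces.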

Frank Rupprecht