
On the CPU I'm gathering an array of MTLTexture objects that I want to send to the fragment shader. There can be any number of these textures at any given moment. How can I send a variable-length array of MTLTextures to a fragment shader?

Example.) CPU:

var txrs: [MTLTexture] = []
for ... {
   txrs.append(...)
}
// Send array of textures to fragment shader.

GPU:

fragment half4 my_fragment(Vertex v [[stage_in]], <array of textures>, ...) {
   ...
   for(int i = 0; i < num_textures; i++) {
      texture2d<half> txr = array_of_textures[i];
   }
   ...
}
JustSomeGuy
iosdev55

1 Answer


The plain texture array the other person suggested won't work: each texture in such an array consumes one bind point, and there are only 31 texture bind points, so you will run out.

Instead, you need to use argument buffers.

For this to work, you need Tier 2 argument buffer support. You can check for it with the argumentBuffersSupport property on MTLDevice.
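For example, a capability check at startup might look like this (a sketch in Swift; how you fall back when Tier 2 is unavailable is up to you):

```swift
import Metal

// Grab the default device; this fails on machines without Metal.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("Metal is not supported on this machine")
}

// Arrays of textures in argument buffers need Tier 2 support.
guard device.argumentBuffersSupport == .tier2 else {
    fatalError("Tier 2 argument buffers unsupported; use a fallback path")
}
```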

You can read more about argument buffers here or watch this talk about bindless rendering.

The basic idea is to use MTLArgumentEncoder to encode the textures you need into an argument buffer. Unfortunately, I don't think there's a direct way to encode a plain array of MTLTextures, so instead you create a struct in your shaders like this

struct SingleTexture
{
    texture2d<half> texture;
};

The texture in this struct has an implicit id of 0. To learn more about id, read the Argument Buffers section of the Metal Shading Language specification; it's basically a unique index for each entry in the argument buffer.

Then, change your function signature to

fragment half4 my_fragment(Vertex v [[stage_in]], device ushort& textureCount [[buffer(0)]], device SingleTexture* textures [[buffer(1)]])

You will then need to bind the count (use uint16_t rather than uint32_t in most cases) as a small 2-byte (or 4-byte) buffer. The set<...>Bytes family of methods on the encoder works well for that.

Then you compile that function to an MTLFunction, and from it you can create an MTLArgumentEncoder using the newArgumentEncoderWithBufferIndex: method (makeArgumentEncoder(bufferIndex:) in Swift). Use buffer index 1 in this case, because that's where your argument buffer is bound in the function.

From the MTLArgumentEncoder you can get encodedLength, which is the size of one SingleTexture struct in the argument buffer. Multiply it by the number of textures to get a buffer of the proper size to encode your argument buffer into.
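Put together, the encoder and buffer setup could look roughly like this in Swift (a sketch: `device`, `library`, and `textures` are assumed to come from your own setup code, and "my_fragment" is the shader function name from above):

```swift
// Assumes `device: MTLDevice`, `library: MTLLibrary`,
// and `textures: [MTLTexture]` already exist.
let fragmentFunction = library.makeFunction(name: "my_fragment")!

// Buffer index 1 is where `device SingleTexture* textures` is bound
// in the shader signature.
let argumentEncoder = fragmentFunction.makeArgumentEncoder(bufferIndex: 1)

// encodedLength is the size of one SingleTexture entry; multiply by
// the texture count to size the whole argument buffer.
let argumentBuffer = device.makeBuffer(
    length: argumentEncoder.encodedLength * textures.count,
    options: .storageModeShared
)!
argumentBuffer.label = "Texture argument buffer"
```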

After that, in your setup code, you can just do this

for(size_t i = 0; i < textureCount; i++)
{
    // We basically just offset into an array of SingleTexture
    [argumentEncoder setArgumentBuffer:<your buffer you just created> offset:argumentEncoder.encodedLength * i];
    [argumentEncoder setTexture:textures[i] atIndex:0];
}

And once you are done encoding the buffer, you can hold on to it until your texture array changes (you don't need to re-encode it every frame).

Then, you need to bind the argument buffer to buffer binding point 1, just as you would bind any other buffer.
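The two bindings (the count at index 0, the argument buffer at index 1) could be sketched like this in Swift, assuming a `renderEncoder` and the objects from the setup above:

```swift
// Assumes `renderEncoder: MTLRenderCommandEncoder`,
// `argumentBuffer: MTLBuffer`, and `textures: [MTLTexture]`.
var textureCount = UInt16(textures.count)

// Bind the count as a tiny inline buffer at index 0.
renderEncoder.setFragmentBytes(&textureCount,
                               length: MemoryLayout<UInt16>.size,
                               index: 0)

// Bind the argument buffer itself at index 1, matching the shader.
renderEncoder.setFragmentBuffer(argumentBuffer, offset: 0, index: 1)
```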

The last thing you need to do is make sure all the resources referenced indirectly are resident on the GPU. Since you encoded your textures into the argument buffer, the driver has no way of knowing whether you use them, because you are not binding them directly.

To do that, use the useResource or useResources variants on the encoder you are using, kinda like this:

[encoder useResources:&textures[0] count:textureCount usage:MTLResourceUsageRead];
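Since your CPU side is in Swift, the same residency call there would be (an array of MTLTexture passes directly, since MTLTexture conforms to MTLResource):

```swift
// Mark every texture referenced by the argument buffer as resident,
// readable by the GPU for this encoder's work.
renderEncoder.useResources(textures, usage: .read)
```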

This is kind of a mouthful, but it is the proper way to bind any number of resources to your shaders.

  • Yeah, you can do much more involved stuff with argument buffers. It's a little bit on the nose with all of the APIs you need to use, but when you set it up, it's super flexible. – JustSomeGuy Jun 30 '21 at 03:11