
I have a source texture (480x480) that was created with mipmapped set to true (error checking removed to simplify this post), and a dest texture (100x100):

// Source texture
var textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .r8Unorm, width: 480, height: 480, mipmapped: true)
textureDescriptor.usage = .unknown // .shaderWrite .shaderRead
srcTexture = metalDevice!.makeTexture(descriptor: textureDescriptor)
// Dest texture
textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .r8Unorm, width: 100, height: 100, mipmapped: false)
textureDescriptor.usage = .shaderWrite
destTexture = metalDevice!.makeTexture(descriptor: textureDescriptor)

The sampler is defined:

let samplerDescriptor = MTLSamplerDescriptor()
samplerDescriptor.magFilter = .linear
samplerDescriptor.minFilter = .linear
samplerDescriptor.rAddressMode = .clampToZero
samplerDescriptor.sAddressMode = .clampToZero
samplerDescriptor.tAddressMode = .clampToZero
samplerDescriptor.normalizedCoordinates = true
textureSampler = metalDevice!.makeSamplerState(descriptor: samplerDescriptor)

I populated the src texture with an image.

Then generated the mipmaps:

let blitEncoder = metalCommandBuffer!.makeBlitCommandEncoder()
blitEncoder!.pushDebugGroup("Dispatch mipmap kernel")
blitEncoder!.generateMipmaps(for: srcTexture!)
blitEncoder!.popDebugGroup()
blitEncoder!.endEncoding()

And in the same command buffer, ran the resize kernel:

let computeEncoder = metalCommandBuffer!.makeComputeCommandEncoder()
computeEncoder!.pushDebugGroup("Dispatch resize image kernel")
computeEncoder!.setComputePipelineState(resizeImagePipeline)
computeEncoder!.setTexture(srcTexture, index: 0)
computeEncoder!.setTexture(destTexture, index: 1)
computeEncoder!.setSamplerState(textureSampler, index: 0)
let threadGroupCount = MTLSizeMake(20, 10, 1)
let threadGroups = MTLSizeMake(destTexture!.width / threadGroupCount.width, destTexture!.height / threadGroupCount.height, 1)
computeEncoder!.dispatchThreadgroups(threadGroups, threadsPerThreadgroup: threadGroupCount)
computeEncoder!.popDebugGroup()
computeEncoder!.endEncoding()

The compute kernel is (remember, this is not a fragment shader, which would automatically select the mipmap level of detail):

kernel void resizeImage(
    texture2d<half, access::sample> sourceTexture [[texture(0)]],
    texture2d<half, access::write> destTexture [[texture(1)]],
    sampler samp [[sampler(0)]],
    uint2 gridPosition [[thread_position_in_grid]])
{
    float2 srcSize = float2(sourceTexture.get_width(0),
                            sourceTexture.get_height(0));
    float2 destSize = float2(destTexture.get_width(0),
                             destTexture.get_height(0));
    float2 sourceCoords = float2(gridPosition) / destSize;
    /* The following attempts all produced a pixelated image
       (no edges smoothed out like a fragment shader would):
       half4 color = sourceTexture.sample(samp, sourceCoords);
       float lod = srcSize.x / destSize.x;
       float lod = 0.0;
       float lod = 1.0;
       float lod = 2.0;
       float lod = 3.0;
       float lod = 4.0;
       float lod = 4.5;
       float lod = 5.0;
       float lod = 5.5;
    */
    float lod = 6.0;
    half4 color = sourceTexture.sample(samp, sourceCoords, level(lod));
    destTexture.write(color, gridPosition);
}

No matter what lod is set to, I get exactly the same pixelated results. Why won't the mipmapping work? Thanks for any help you can provide.

Labeno
1 Answer


If you want to select (or bias toward) a particular LOD, your sampler must specify a mip filter (distinct from the min and mag filters):

samplerDescriptor.mipFilter = .nearest

Using .nearest in this context will simply snap to the nearest LOD and sample it with the bilinear filtering you seem to be looking for. You can also specify .linear, which uses trilinear filtering to interpolate between the two nearest mip levels. The default value of mipFilter is .notMipmapped, which restricts sampling to the base level; that is why every lod value you tried produced the same result.
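As a sketch, the only change to the question's sampler setup would be one added line (variable names taken from the question; not tested here, since it needs a Metal device):

```swift
// Sampler with a mip filter, so sample(coords, level(lod)) in the kernel
// can actually select a mip level. With the default .notMipmapped, the
// sampler always reads level 0, regardless of the lod argument.
let samplerDescriptor = MTLSamplerDescriptor()
samplerDescriptor.magFilter = .linear
samplerDescriptor.minFilter = .linear
samplerDescriptor.mipFilter = .nearest   // or .linear for trilinear filtering
samplerDescriptor.rAddressMode = .clampToZero
samplerDescriptor.sAddressMode = .clampToZero
samplerDescriptor.tAddressMode = .clampToZero
samplerDescriptor.normalizedCoordinates = true
textureSampler = metalDevice!.makeSamplerState(descriptor: samplerDescriptor)
```

As a side note, for a 480 to 100 downscale a natural starting lod would be log2(480/100), roughly 2.26, rather than the raw size ratio, since each mip level halves the dimensions.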

warrenm