In WebGPU you can create a render pass by defining its descriptor:
```typescript
const renderPassDesc: GPURenderPassDescriptor = {
  colorAttachments: [
    {
      view: context.getCurrentTexture().createView(),
      loadOp: "clear",
      clearValue: [0.2, 0.3, 0.5, 1],
      storeOp: "store"
    }
  ]
};
```
You then pass it to the command encoder to start recording:
```typescript
const commandEncoder = device.createCommandEncoder();
const renderPass = commandEncoder.beginRenderPass(renderPassDesc);
```
So it appears that you need the current texture before you can start recording: without calling `context.getCurrentTexture().createView()` you can't create the descriptor, and without the descriptor you can't begin the pass. But the API suggests that this texture can change every frame (this was the case even months ago, when the API was different and you retrieved the texture from the swap chain). So it seems you can't reuse render passes across frames, unless of course you don't render to the canvas and target an offscreen texture instead.
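To make the pattern concrete, here is a minimal sketch of the per-frame loop as I understand it. The `DeviceLike` and `CanvasContextLike` interfaces are stand-ins I've added for the real `GPUDevice` and `GPUCanvasContext` types, only so the snippet is self-contained; the point is that `getCurrentTexture()` has to be called anew inside every frame.

```typescript
// Structural stand-ins for the WebGPU interfaces (GPUDevice, GPUCanvasContext)
// so this sketch is self-contained; in a real app you would use the actual types.
interface CanvasContextLike {
  getCurrentTexture(): { createView(): unknown };
}
interface DeviceLike {
  createCommandEncoder(): {
    beginRenderPass(desc: unknown): { end(): void };
    finish(): unknown;
  };
  queue: { submit(buffers: unknown[]): void };
}

function renderFrame(device: DeviceLike, context: CanvasContextLike): void {
  const renderPassDesc = {
    colorAttachments: [
      {
        // Re-acquired every frame: the browser hands back a different
        // texture after each present, so the view cannot be cached.
        view: context.getCurrentTexture().createView(),
        loadOp: "clear" as const,
        clearValue: [0.2, 0.3, 0.5, 1],
        storeOp: "store" as const,
      },
    ],
  };
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginRenderPass(renderPassDesc);
  // ... set pipeline and issue draw calls here ...
  pass.end();
  device.queue.submit([encoder.finish()]);
}
```

In other words, the descriptor object itself could be kept around, but its `view` field (and hence the whole pass recording) must be redone each frame.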
So the question is: in WebGPU, can you reuse the same render pass across multiple frames?
## Comparison with Vulkan
My question stems from my (limited) exposure to Vulkan. In Vulkan, you can reuse recorded resources because you can query upfront how many `VkImage` objects are in the swap chain; they have 0-based indices such as `0`, `1`, and `2`. I can't remember the exact syntax, but you can record, say, 3 separate command buffers, one per `VkImage`, and reuse them across frames. All you have to do in the render loop is query the index of the current `VkImage` and submit the corresponding pre-recorded command buffer.