
I am using the wgpu crate in Rust to render triangles in layers. The problem is that the z coordinate of the vertices seems to be ignored. The shader returns the z coordinate in the output position, and no further transformation is applied to it. Nevertheless, triangles that are rendered last (last positions in the vertex buffer) are always drawn on top of the previous ones, regardless of their z coordinate.

struct VertexInput {
    @location(0) position: vec3<f32>,
}

struct VertexOutput {
    @builtin(position) clip_position: vec4<f32>,
}

@vertex
fn vs_main(model: VertexInput) -> VertexOutput {
    var out: VertexOutput;
    out.clip_position = vec4<f32>(model.position, 1.0);
    return out;
}

The simplest example has 8 vertices: the first 4 form a plane at z = 0.1, and the other 4 form a plane at z = 0 (with half the area of the first). In this case the second plane is rendered on top of the first one despite its z coordinate. Setting the z to a negative value causes the vertices not to be rendered at all.

What did I get wrong?

Redirectk
    You will need a [DepthBuffer](https://sotrh.github.io/learn-wgpu/beginner/tutorial8-depth/). – frankenapps Mar 27 '23 at 17:54
  • @frankenapps ah thank you! Does that respect transparent textures? – Redirectk Mar 27 '23 at 20:38
  • For transparent textures you will need to enable alpha blending as well (and if not using additive blending, manually sort your drawcalls by z coordinate as well) – swiftcoder Mar 27 '23 at 21:07
  • @swiftcoder Can you please elaborate? Sorting by z is too expensive in my case (too many polygons + having to copy the new sorted data to the device). Depth buffer seems cheaper at first, but it's not clear to me which alpha blending mode would just work with a depth buffer without requiring extra manual work – Redirectk Mar 27 '23 at 21:27

1 Answer


There are many ways to handle layering of objects, but the simplest, and the default for GPU-accelerated graphics, is to attach a depth buffer and enable depth testing. (As an aside, wgpu's clip space has z in the range 0 to 1, which is why vertices with negative z are clipped rather than drawn.)
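As a rough sketch, enabling a depth buffer in wgpu involves three pieces: a depth texture sized to the surface, a `DepthStencilState` on the render pipeline, and a depth attachment on the render pass. The snippet below assumes you already have a `device` and a surface `config`; names like `depth_texture` are illustrative, and some field spellings vary between wgpu versions.

```rust
// Assumed to exist: `device: wgpu::Device`, `config: wgpu::SurfaceConfiguration`.
const DEPTH_FORMAT: wgpu::TextureFormat = wgpu::TextureFormat::Depth32Float;

// 1. Create a depth texture matching the surface size.
let depth_texture = device.create_texture(&wgpu::TextureDescriptor {
    label: Some("depth texture"),
    size: wgpu::Extent3d {
        width: config.width,
        height: config.height,
        depth_or_array_layers: 1,
    },
    mip_level_count: 1,
    sample_count: 1,
    dimension: wgpu::TextureDimension::D2,
    format: DEPTH_FORMAT,
    usage: wgpu::TextureUsages::RENDER_ATTACHMENT,
    view_formats: &[],
});
let depth_view = depth_texture.create_view(&wgpu::TextureViewDescriptor::default());

// 2. In the render pipeline descriptor, enable depth testing and writing:
// depth_stencil: Some(wgpu::DepthStencilState {
//     format: DEPTH_FORMAT,
//     depth_write_enabled: true,
//     depth_compare: wgpu::CompareFunction::Less, // smaller z = closer = wins
//     stencil: wgpu::StencilState::default(),
//     bias: wgpu::DepthBiasState::default(),
// }),

// 3. In the render pass descriptor, attach the depth view:
// depth_stencil_attachment: Some(wgpu::RenderPassDepthStencilAttachment {
//     view: &depth_view,
//     depth_ops: Some(wgpu::Operations {
//         load: wgpu::LoadOp::Clear(1.0), // clear to the far plane each frame
//         store: wgpu::StoreOp::Store,    // `store: true` on wgpu < 0.18
//     }),
//     stencil_ops: None,
// }),
```

Remember to recreate the depth texture whenever the window is resized, since its dimensions must match the color attachment.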

Does that respect transparent textures?

Displaying transparent objects in general is hard in the triangle-rasterization world. (It's trivial in a raytracer.) There is no single setting that makes it just work; the best practical solution depends on the details of your problem.

  • If your transparent objects never occupy the same part of the screen as each other, then it is sufficient to draw all the transparent objects after the opaque objects. This ensures that each transparent object is either drawn after what is behind it, or is fully obscured by an opaque object.

  • If your transparent objects are glowing, then you can use additive blending, which doesn't care what order things are drawn in, together with drawing them after the opaque objects.

  • If you don't mind an approximately correct result then you can use “order-independent transparency” (OIT) techniques.

  • If none of the above apply then you must sort the triangles before you draw them.
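For the additive-blending case above, here is a sketch of what the blend configuration might look like in wgpu. This is an assumption-laden fragment, not drop-in code: it presumes a separate pipeline for transparent geometry, drawn after the opaque pass with depth *testing* on but depth *writes* off.

```rust
// Additive blending: output = src + dst, regardless of draw order.
let additive = wgpu::BlendState {
    color: wgpu::BlendComponent {
        src_factor: wgpu::BlendFactor::One,
        dst_factor: wgpu::BlendFactor::One,
        operation: wgpu::BlendOperation::Add,
    },
    alpha: wgpu::BlendComponent {
        src_factor: wgpu::BlendFactor::One,
        dst_factor: wgpu::BlendFactor::One,
        operation: wgpu::BlendOperation::Add,
    },
};

// Used in the transparent pipeline's fragment targets:
// targets: &[Some(wgpu::ColorTargetState {
//     format: config.format,       // same format as the surface
//     blend: Some(additive),
//     write_mask: wgpu::ColorWrites::ALL,
// })],
//
// And in that pipeline's DepthStencilState, keep depth_compare as-is but set:
//     depth_write_enabled: false,
// so transparent fragments are still occluded by opaque geometry without
// occluding each other.
```

For ordinary (non-glowing) transparency you would instead use `wgpu::BlendState::ALPHA_BLENDING`, but then draw order between overlapping transparent objects matters again, which is where sorting or OIT comes in.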

Kevin Reid
  • Thank you for the answer. I think I have no choice but to sort the triangles – Redirectk Mar 29 '23 at 18:28
  • Unless you have a lot of mutually-intersecting objects, you can usually get away with sorting the objects by depth, rather than the individual triangles – swiftcoder Apr 05 '23 at 14:05