I am mixing ray casting and standard rasterization in my graphics pipeline, and I need the ray casting pass to produce a depth buffer that is interoperable with the one written by rasterization.
I am aware that, as a previous answer suggests, I could take the world-space position of the ray-cast intersection and transform it to clip space with the same matrix I use for rasterization (sketched below). In my case, though, that would cost a full matrix multiplication per pixel, and to conserve what little compute budget I have, I want to avoid it.
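For reference, this is roughly the per-pixel matrix path I am trying to avoid. The function name is my own, and I am assuming GLM and OpenGL-style conventions (NDC depth in [-1, 1] remapped to a [0, 1] buffer value); treat it as a sketch, not a drop-in implementation:

```cpp
// Sketch of the matrix-based approach I want to avoid:
// world position -> clip space -> NDC -> [0, 1] depth-buffer value.
#include <glm/glm.hpp>

float depthFromHitPoint(const glm::vec3& hitWorldPos,
                        const glm::mat4& viewProj)   // same matrix rasterization uses
{
    // Full per-pixel matrix multiply: world space -> clip space.
    glm::vec4 clip = viewProj * glm::vec4(hitWorldPos, 1.0f);

    // Perspective divide: clip space -> NDC, z in [-1, 1].
    float ndcZ = clip.z / clip.w;

    // Remap to [0, 1], matching the default glDepthRange.
    return ndcZ * 0.5f + 0.5f;
}
```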
I know there must be a way to compute a proper depth value from a ray cast using nothing but vector math (something along the lines of the sketch below), but I'm not clear on the math that generates that value in the first place (i.e. the inner workings of perspective projection), so I don't know how to derive the depth-buffer values without a projection matrix.
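To show what I mean by "nothing but vector math": I'm picturing something like the snippet below (names and GLM usage are my own guesses), which gets a linear view-space depth with a single dot product. What I'm missing is how to turn that linear value into the non-linear value the rasterizer actually writes to the depth buffer:

```cpp
// Roughly what I have in mind: linear depth along the view axis from a
// single dot product, no matrix multiply. The open question is how to map
// this to the non-linear value stored in the depth buffer.
#include <glm/glm.hpp>

float linearViewDepth(const glm::vec3& hitWorldPos,
                      const glm::vec3& cameraPos,
                      const glm::vec3& cameraForward)  // assumed unit length
{
    // Distance along the view axis, i.e. view-space -z for a
    // right-handed, OpenGL-style camera.
    return glm::dot(hitWorldPos - cameraPos, cameraForward);
}
```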