
Given a WebGL scene (created with THREE.js), how would you go about accessing the floating point values of the DEPTH_ATTACHMENT (as an array of data outside of the WebGL context), given that the framebuffer has been bound to a texture using framebufferTexture2D?

I've gathered one solution so far, which is to render the scene to a render target using a custom override shader that reads the depth texture and encodes it into RGB. The code is very similar to this THREE.js example: Depth-Texture-Example.

#include <packing>

varying vec2 vUv;
uniform sampler2D tDiffuse;
uniform sampler2D tDepth;
uniform float cameraNear;
uniform float cameraFar;

// Read the non-linear depth buffer value and convert it to a linear depth in [0.0, 1.0].
float readDepth (sampler2D depthSampler, vec2 coord) {
    float fragCoordZ = texture2D(depthSampler, coord).x;
    float viewZ = perspectiveDepthToViewZ( fragCoordZ, cameraNear, cameraFar );
    return viewZToOrthographicDepth( viewZ, cameraNear, cameraFar );
}

void main() {
    // The color texture is sampled but not used; only the depth is written out.
    vec3 diffuse = texture2D(tDiffuse, vUv).rgb;
    float depth = readDepth(tDepth, vUv);
    gl_FragColor.rgb = vec3(depth);
    gl_FragColor.a = 1.0;
}
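
The surrounding JavaScript setup is essentially the same as in the linked example, roughly like this (simplified sketch; postMaterial, postScene and postCamera are set up as in that example, and I render the encoded result into a second render target so it can be read back):

const width = window.innerWidth, height = window.innerHeight;

// First pass: render the scene into a target that also captures a depth texture
// (requires the WEBGL_depth_texture extension under WebGL 1).
const target = new THREE.WebGLRenderTarget(width, height);
target.depthTexture = new THREE.DepthTexture(width, height);

renderer.setRenderTarget(target);
renderer.render(scene, camera);

// Second pass: draw a full-screen quad with the override shader above, writing
// the result into a second render target so it can be read back afterwards.
const packedTarget = new THREE.WebGLRenderTarget(width, height);
postMaterial.uniforms.tDiffuse.value = target.texture;
postMaterial.uniforms.tDepth.value = target.depthTexture;

renderer.setRenderTarget(packedTarget);
renderer.render(postScene, postCamera);
renderer.setRenderTarget(null);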

Once this has rendered, I can use readPixels to read the pixels into an array. However, this approach has very low precision: since the same depth value is written to all three 8-bit channels, the result is restricted to 256 discrete values. Is there a way to get higher precision out of this method, or an alternative?
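
For completeness, the readback step is simply:

// Read the RGBA8 pixels of the encoded render target back to the CPU.
const pixels = new Uint8Array(width * height * 4);
renderer.readRenderTargetPixels(packedTarget, 0, 0, width, height, pixels);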

Ultimately, what I want is access to the depth buffer as an array of floating point values outside of the WebGL context, obtained efficiently. I have a custom rasterizer that can create a reasonably good depth buffer, but I don't want to waste time redoing work that has already been done.

Andy

1 Answer


One possibility would be to encode the 24 significant bits of a 32-bit IEEE 754 floating point value into a vec3:

vec3 PackDepth(float depth)
{
    // Scale so that depth == 1.0 still fits into 24 bits without overflowing.
    float depthVal = depth * (256.0*256.0*256.0 - 1.0) / (256.0*256.0*256.0);
    // Split the value into 8-bit chunks via fractional shifts of 1, 256 and 65536.
    vec4 encode    = fract(depthVal * vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0));
    // Subtract the bits that bled into the next channel and add a half-step rounding bias.
    return encode.xyz - encode.yzw / 256.0 + 1.0/512.0;
}
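
Applied to the fragment shader in the question, the packed value would replace the grayscale output, e.g.:

gl_FragColor = vec4(PackDepth(depth), 1.0);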

The R, G and B color channels (as 0-255 byte values read back with readPixels) can be decoded to a depth in the range [0.0, 1.0] like this:

depth = (R*256.0*256.0 + G*256.0 + B) / (256.0*256.0*256.0 - 1.0);
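
On the JavaScript side, the bytes returned by readPixels (or readRenderTargetPixels) can then be turned back into floats along these lines (a sketch, assuming pixels is the Uint8Array holding the RGBA bytes and width/height are the target dimensions):

// Reconstruct a normalized depth in [0.0, 1.0] from each RGBA8 texel.
const depths = new Float32Array(width * height);
for (let i = 0; i < depths.length; i++) {
    const r = pixels[i * 4 + 0];
    const g = pixels[i * 4 + 1];
    const b = pixels[i * 4 + 2];
    depths[i] = (r * 256.0 * 256.0 + g * 256.0 + b) / (256.0 * 256.0 * 256.0 - 1.0);
}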
Rabbid76