
We've been doing a lot of work trying to volume render 3D cloud fields in WebGL. The approach we've taken so far is outlined here - the start position of each ray is the current position on the front face of the volume cube, and the end position is calculated in a previous pass, which encodes the xyz values as a backface texture.

How can we extend this to work when the camera is inside the volume? Do we need to create smaller volume cubes on the fly? Can we just change the shader to start marching from the camera instead of the front face, and project onto the back of the cube?

We're not really sure where to start with this!

Thanks in advance

nrob

1 Answer


Render only a single pass.

In that pass you render the back faces only. The camera position needs to be translated from world coordinates into the coordinate system built from the three axes of the volume box you render, each scaled by the box's size along that axis. Your goal is to create a 4x4 matrix whose first three column vectors are vec4(...,0), where the x,y,z of those vectors are the box's x-, y- and z-axis directions with the box's length along each. If the box is parallel to the x axis, that vector is (1,0,0); if it is stretched to (2,0,0), then that is the box's own x axis, and it becomes column 0 of the matrix. Do the same for the y and z axes with their lengths. The last column vector of the matrix is the position of the box, as vec4(tx,ty,tz,1). This matrix then defines a coordinate system, and you use it to transform the camera position into the uniform (0,0,0)-(1,1,1) box of the volume.

Create the inverse of that volume box matrix and multiply the camera as vec4(camPos, 1) from the right side onto invVolMatrix. Send the resulting vec3 to the shader as a uniform.
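Roughly, the CPU-side setup could look like the sketch below. This assumes the gl-matrix library; axisX/axisY/axisZ, boxOrigin, cameraWorldPos, gl and program are illustrative placeholder names, not from the answer above.

// CPU-side sketch (assumes gl-matrix; all names are illustrative placeholders)
import { mat4, vec3 } from 'gl-matrix';

declare const gl: WebGLRenderingContext;
declare const program: WebGLProgram;

const axisX = vec3.fromValues( 2, 0, 0 );      // box edge vectors in world space,
const axisY = vec3.fromValues( 0, 1, 0 );      // already scaled by the box size
const axisZ = vec3.fromValues( 0, 0, 1 );      // along each axis
const boxOrigin = vec3.fromValues( 0, 0, 0 );  // world-space position of the box
const cameraWorldPos = vec3.fromValues( 0.5, 0.5, 0.5 );

// column-major 4x4: columns 0-2 are the scaled axes, column 3 the translation
const volMatrix = mat4.fromValues(
    axisX[0], axisX[1], axisX[2], 0,
    axisY[0], axisY[1], axisY[2], 0,
    axisZ[0], axisZ[1], axisZ[2], 0,
    boxOrigin[0], boxOrigin[1], boxOrigin[2], 1
);

// invert, then map the camera into the unit (0,0,0)-(1,1,1) box
const invVolMatrix = mat4.create();
mat4.invert( invVolMatrix, volMatrix );
const localCamPos = vec3.create();
vec3.transformMat4( localCamPos, cameraWorldPos, invVolMatrix );

gl.uniform3fv( gl.getUniformLocation( program, 'LOCAL_CAM_POS' ), localCamPos );

If the box is only scaled and translated, the inverse is just the reciprocal scales plus a negated translation, but mat4.invert handles the general (rotated) case too.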

Render only back faces, with (0,0,0) to (1,1,1) coordinates on their respective volBox corners, as you already did (a minimal culling sketch follows the list below). Now you have in your shader:

  1. the uniform camera position (campos)
  2. the back-face volume texture coordinate
  3. the knowledge that your volBox is a unit cube in its local coordinate system, with diagonal from (0,0,0) to (1,1,1)
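For reference, restricting the pass to back faces is typically done with front-face culling in WebGL. A minimal sketch (drawVolumeBox is a hypothetical draw call, not from the answer):

declare const gl: WebGLRenderingContext;
declare function drawVolumeBox(): void;

// render only the back faces of the volume box
gl.enable( gl.CULL_FACE );
gl.cullFace( gl.FRONT );   // discard front faces; the rasterizer keeps the back
drawVolumeBox();           // draws the unit cube with its (0,0,0)-(1,1,1)
                           // corner coordinates as vertex attributes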

In the shader do:

varying vec3 vLocalUnitTexCoord;   // backface interpolated coordinate
uniform vec3 LOCAL_CAM_POS;        // localised camPos

struct AABB {
    vec3 min; // (0,0,0) 
    vec3 max; // (1,1,1)
};

struct Ray {
    vec3 origin; vec3 dir;
};

float getUnitAABBEntry( in Ray r ) {
    AABB b;
    b.min = vec3( 0.0 );
    b.max = vec3( 1.0 );

    // slab method: compute clipping against the box.min and box.max planes
    vec3 rInvDir = vec3( 1.0 ) / r.dir;
    vec3 tMinima = ( b.min - r.origin ) * rInvDir;
    vec3 tMaxima = ( b.max - r.origin ) * rInvDir;

    // per axis, keep the nearer of the two slab intersections
    vec3 tEntries = min( tMinima, tMaxima );

    // the real entry point is the largest of the three per-axis entry distances
    float tMaxEntry = max( max( tEntries.x, tEntries.y ), tEntries.z );
    return tMaxEntry;
}

vec3 getCloserPos( in vec3 camera, in vec3 frontFaceIntersection, in float t ) {
    // t > 0: camera is outside the box, start at the front-face intersection;
    // t <= 0: camera is inside the box, start marching from the camera itself
    float useFrontCoord = 0.5 + 0.5 * sign( t );
    vec3 startPos = mix( camera, frontFaceIntersection, useFrontCoord );
    return startPos;
}

void main(void)
{
    Ray r;
    r.origin = LOCAL_CAM_POS;
    r.dir = normalize( vLocalUnitTexCoord - LOCAL_CAM_POS );

    float t = getUnitAABBEntry( r );
    vec3 frontFaceLocalUnitTexCoord = r.origin + r.dir * t;
    vec3 startPos = getCloserPos( LOCAL_CAM_POS, frontFaceLocalUnitTexCoord, t );

    // loop for integration follows here
    vec3 start = startPos;
    vec3 end = vLocalUnitTexCoord;
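    // --- A minimal sketch of the elided integration loop (an assumption, not
    // --- part of the original answer). sampleVolume() is a hypothetical helper
    // --- that fetches density at a unit-cube coordinate, e.g. from a 3D
    // --- texture in WebGL2 or a 2D slice atlas in WebGL1; step-size scaling
    // --- of the density is omitted for brevity.
    const int NUM_STEPS = 64;
    vec3 stepVec = ( end - start ) / float( NUM_STEPS );
    vec3 pos = start;
    vec4 accum = vec4( 0.0 );
    for ( int i = 0; i < NUM_STEPS; i++ ) {
        float density = sampleVolume( pos );
        // front-to-back compositing, white clouds
        accum.rgb += ( 1.0 - accum.a ) * density * vec3( 1.0 );
        accum.a   += ( 1.0 - accum.a ) * density;
        pos += stepVec;
    }
    gl_FragColor = accum;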
}

Happy coding!

VisorZ