
I'm using a texture to preprocess some position data on the GPU for a spring physics simulation. In the program, a viewer can click a soft object composed of springs; it disappears and a different springy object appears in its place. To make the object disappear and a new one appear, I need to access the texture holding the preprocessed positions at runtime, alter part of it, and place it back into the GL context.

Right now I am reading out the positions texture using gl.readPixels, but instead of the stored float values of the positions I see a variety of RGBA values ranging from 0-255. How do I access the float values I've stored in this buffer?

var pixels = new Uint8Array(width * height * 4);

var gl = renderer.context;
gl.bindFramebuffer(gl.FRAMEBUFFER, gpuCompute.getCurrentRenderTarget( positionVariable ).texture.__webglFramebuffer);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);

The above code produces an array of RGBA 0-255 values - how do I access the values as the position floats I've stored in this texture?

I've also written up this alternate version, but I am unsure how to refer to the texture in the gl context (**see starred comment):

var pixels = new ArrayBuffer(width * height * 4);
var internalFormat;
var gl = renderer.context;
gl.bindTexture(gl.TEXTURE_2D, **how do I know my texture ID?** );
gl.getTexLevelParameteriv(gl.TEXTURE_2D, 0, gl.TEXTURE_COMPONENTS, internalFormat); // get internal format type of GL texture

gl.getTexImage( gl.TEXTURE_2D, 0, internalFormat, gl.UNSIGNED_BYTE, pixels );

three.js v80

gromiczek

1 Answer


Likely you have an RGBA texture with 8 bits per channel. Since bit manipulation is not available in GLSL ES 2.0, you can pack the float across the four color channels in the shader and unpack the value after gl.readPixels. In three.js you can find the packing code in the packing.glsl ShaderChunk.
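A minimal sketch of the pack/unpack idea, here done on the CPU side for illustration (in practice the packing step runs in the fragment shader, and this base-256 scheme is a simplified stand-in for three.js's packing.glsl, which uses an equivalent approach with carry correction). It assumes the value is in [0, 1):

```javascript
// Pack a float in [0, 1) into four 8-bit channels (successive base-256
// digits of the fraction). This mirrors what a GLSL packing shader would
// write into gl_FragColor before an RGBA8 readback.
function packFloatToRGBA(v) {
  var bytes = new Uint8Array(4);
  var f = v;
  for (var i = 0; i < 4; i++) {
    f *= 256;
    var digit = Math.floor(f);
    bytes[i] = digit; // next base-256 digit
    f -= digit;       // keep the remaining fraction
  }
  return bytes;
}

// Reassemble the float from the four bytes returned by gl.readPixels.
function unpackRGBAToFloat(bytes) {
  var v = 0;
  for (var i = 3; i >= 0; i--) {
    v = (v + bytes[i]) / 256; // Horner evaluation in base 1/256
  }
  return v;
}
```

With all four channels used, the round-trip error is on the order of 2^-32, versus ~1/255 when a float is stored in a single 8-bit channel.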

To render into a floating-point texture in WebGL 1.0 you need the WEBGL_color_buffer_float extension enabled, but according to WebGLStats this extension is not widespread.
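If the extension is available, you can skip packing entirely and read floats back directly. A sketch of the feature check, written as a helper taking any WebGLRenderingContext (in three.js v80, `renderer.context`); the extension names are the real WebGL 1.0 identifiers, but support varies by browser and GPU:

```javascript
// Returns true if the context supports creating FLOAT textures
// (OES_texture_float) and rendering to / reading from them
// (WEBGL_color_buffer_float).
function canReadFloatPixels(gl) {
  return gl.getExtension('OES_texture_float') !== null &&
         gl.getExtension('WEBGL_color_buffer_float') !== null;
}

// When supported, the readback uses gl.FLOAT and a Float32Array
// instead of gl.UNSIGNED_BYTE and a Uint8Array:
//
//   var pixels = new Float32Array(width * height * 4);
//   gl.readPixels(0, 0, width, height, gl.RGBA, gl.FLOAT, pixels);
```

When the check fails, fall back to the packing approach described above.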

Also, if all computations are done in the vertex shader, another option is encoding your float as depth using the WEBGL_depth_texture extension. That can give you 16 bits of precision (or more).

Ramil Kudashev
  • I initially create this texture in a fragment shader to do the calculations and then pass it to a second vertex/frag shader pair to use the information to create my simulation. It works great. When I intercept it between these two passes, shouldn't it have all the float info in it already - meaning can I unpack it in the form it's already in? Or is this related to that limitation in GLSL ES 2.0 you described? Thanks! – gromiczek Oct 01 '16 at 12:55
  • The limitation relates to rendering into textures with floating-point values. If you do something like `gl_FragColor.rgba = ${my_float}` with the default framebuffer (RGBA8 color), you write the same value into each of the four 8-bit channels. If one-channel precision is enough, there is no reason to pack anything: you should get the same value whether you read the pixel on the CPU or sample it in the next pass. [gl.readPixels](https://developer.mozilla.org/en-US/docs/Web/API/WebGLRenderingContext/readPixels) has a `type` parameter, which can help you handle these bits of data the way you want. – Ramil Kudashev Oct 02 '16 at 08:48
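To make "one-channel precision" concrete: storing a float in a single 8-bit channel quantizes it to 256 levels, so gl.readPixels can only recover it to within about 1/255. A tiny sketch of that round trip (the scaling convention is an assumption; it matches how RGBA8 normalizes [0, 1] to [0, 255]):

```javascript
// Simulate writing a float in [0, 1] to one channel of an RGBA8
// framebuffer and reading it back with gl.readPixels.
function roundTripOneChannel(v) {
  var byte = Math.round(v * 255); // what the GPU stores in the channel
  return byte / 255;              // what readPixels hands back
}
```

If ~1/255 resolution is acceptable for your positions, no packing is needed; otherwise use the four-channel packing described in the answer.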