I'm attempting to translate my raw JavaScript image-processing code into GLSL (WebGL2) to achieve performance gains. Simply put, this is my approach:
Per frame:
1) Set the camera stream video as a texture.
2) Apply some effects in the first framebuffer.
3) Set the result as a texture for the second framebuffer.
4) Apply some effects in the second framebuffer.
5) Set the result as a texture for the third framebuffer.
6) Apply some effects in the third framebuffer.
...
Last step: bind the null framebuffer and draw to the canvas with the final effects.
Note: the effects are progressive; each one builds on the finished result of the previous effect.
My question is: I want my fragment shader to write to an array of data/metadata, in addition to the standard fragment output, that can be used in the next framebuffer pass for certain operations.
So technically, I want my first shader pass to save an array, and my second shader pass to use these saved values. I'm looking for what kind of structure/data/object to use for this and, if possible, how to use it.
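To make this concrete, here is a toy CPU-side version of the pattern I'm trying to port: the first stage saves a per-pixel side array, and the second stage reads it. (The stage names, the grayscale/threshold effects, and the metadata layout are just placeholders, not my real effects.)

```javascript
// Stage one: grayscale the pixels AND save one metadata value per pixel
// (here, normalized luminance) for the next stage to consume.
function stageOne(pixels) {
  const out = new Uint8ClampedArray(pixels.length);
  const meta = new Float32Array(pixels.length / 4); // one value per RGBA pixel
  for (let i = 0; i < pixels.length; i += 4) {
    const lum = 0.2126 * pixels[i] + 0.7152 * pixels[i + 1] + 0.0722 * pixels[i + 2];
    out[i] = out[i + 1] = out[i + 2] = lum;
    out[i + 3] = pixels[i + 3];
    meta[i / 4] = lum / 255; // saved for the next stage
  }
  return { out, meta };
}

// Stage two: threshold, driven by the metadata saved in stage one.
function stageTwo(pixels, meta) {
  const out = new Uint8ClampedArray(pixels.length);
  for (let i = 0; i < pixels.length; i += 4) {
    const on = meta[i / 4] > 0.5 ? 255 : 0; // uses stage-one metadata
    out[i] = out[i + 1] = out[i + 2] = on;
    out[i + 3] = pixels[i + 3];
  }
  return out;
}
```

On the GPU, each "stage" is one of my shader passes, and the `meta` array is the extra per-pixel data I want to carry between passes.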
My current WebGL code goes something like this:
//-- Load the current video frame into the source texture
refresh_texture();

//-- First pass: render into framebuffer 0
GL.bindFramebuffer(GL.FRAMEBUFFER, framebuffers[0]);
GL.uniform1f(_stageNumber, 0);
GL.drawElements(GL.TRIANGLES, 6, GL.UNSIGNED_SHORT, 0);

//-- Second pass: sample the texture we just rendered to,
//-- and render into framebuffer 1
GL.bindTexture(GL.TEXTURE_2D, textures[0]);
GL.uniform1f(_stageNumber, 1);
GL.bindFramebuffer(GL.FRAMEBUFFER, framebuffers[1]);
GL.drawElements(GL.TRIANGLES, 6, GL.UNSIGNED_SHORT, 0);
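From what I've read, WebGL2 supports multiple render targets, which sounds like it might be what I need: attach a second texture to the framebuffer as `COLOR_ATTACHMENT1`, enable it with `GL.drawBuffers([GL.COLOR_ATTACHMENT0, GL.COLOR_ATTACHMENT1])`, and declare a second output in the fragment shader. A sketch of what I imagine the shader would look like (`u_image`, `v_texCoord`, and `metaData` are my own placeholder names):

```glsl
#version 300 es
precision highp float;

in vec2 v_texCoord;
uniform sampler2D u_image;

// Standard color output -> COLOR_ATTACHMENT0
layout(location = 0) out vec4 fragColor;
// Extra per-pixel data for the next pass -> COLOR_ATTACHMENT1
layout(location = 1) out vec4 metaData;

void main() {
  vec4 color = texture(u_image, v_texCoord);
  fragColor = color;
  // Pack whatever per-pixel values the next pass needs,
  // e.g. luminance in the red channel
  metaData = vec4(dot(color.rgb, vec3(0.2126, 0.7152, 0.0722)), 0.0, 0.0, 1.0);
}
```

Then the next pass would presumably bind that second texture as an ordinary `sampler2D` and read the saved values back. Is this the right approach, or is there a better structure/object for carrying data between passes?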