
I am trying to perform MSAA on a framebuffer, and in the standalone version, where I draw a cube to the framebuffer and blit that framebuffer to the canvas, it works like a charm:

var gl = canvas.getContext("webgl2", {
    antialias: false
});

const framebuffer = gl.createFramebuffer();
const renderbuffer = gl.createRenderbuffer();

// Multisampled color storage attached to the framebuffer the cube is drawn into
gl.bindRenderbuffer(gl.RENDERBUFFER, renderbuffer);
gl.renderbufferStorageMultisample(gl.RENDERBUFFER, gl.getParameter(gl.MAX_SAMPLES), gl.RGBA8, canvas.width, canvas.height);
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, renderbuffer);

.. Prepare scene

gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);

.. Draw scene

// Resolve the multisampled framebuffer to the default framebuffer (the canvas)
gl.bindFramebuffer(gl.READ_FRAMEBUFFER, framebuffer);
gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, null);
gl.clearBufferfv(gl.COLOR, 0, [1.0, 1.0, 1.0, 1.0]);
gl.blitFramebuffer( 0, 0, canvas.width, canvas.height,
                    0, 0, canvas.width, canvas.height,
                    gl.COLOR_BUFFER_BIT, gl.LINEAR);

But when I do this in my engine with a deferred pipeline, the blit is performed but the multisampling (MSAA) is not. The only difference I can think of is that there I am drawing a full-screen quad textured with an image of the scene into the framebuffer, whereas in the working example I draw a cube.

As requested, in the case where it is not working the setup is like this:

var gl = canvas.getContext("webgl2", {
    antialias: false
});

.. Load resources ..


.. Prepare renderpasses ..

shadow_depth for every light 
deferred scene 
ssao
shadow for first light
convolution on ssao and shadow
convolution 
uber for every light
tonemap 
msaa

..

.. draw renderpasses ..

deferred scene 
ssao
shadow for first light
convolution on ssao and shadow
convolution 
uber for every light
tonemap 

...

const framebuffer = gl.createFramebuffer();
const renderbuffer = gl.createRenderbuffer();

// Multisampled color storage, same as in the standalone version
gl.bindRenderbuffer(gl.RENDERBUFFER, renderbuffer);
gl.renderbufferStorageMultisample(gl.RENDERBUFFER, gl.getParameter(gl.MAX_SAMPLES), gl.RGBA8, this.width, this.height);
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, renderbuffer);

gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);

.. Draw tonemapped scene to a full-screen quad

// Resolve the multisampled framebuffer to the default framebuffer (the canvas)
gl.bindFramebuffer(gl.READ_FRAMEBUFFER, framebuffer);
gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, null);
gl.clearBufferfv(gl.COLOR, 0, [1.0, 1.0, 1.0, 1.0]);
gl.blitFramebuffer( 0, 0, canvas.width, canvas.height,
                    0, 0, canvas.width, canvas.height,
                    gl.COLOR_BUFFER_BIT, gl.LINEAR);

Kaj Dijkstra
  • You've shown us the "good" case but you have not shown us the "bad" case. – prideout Dec 11 '19 at 17:17
  • I edited my initial question. The whole case is way too complex to describe; I believe the only important difference is that the scene is not drawn in 3D, but a quad is drawn with an image of the scene on it, and that quad is drawn onto the framebuffer. – Kaj Dijkstra Dec 12 '19 at 12:00

1 Answer


renderbufferStorageMultisample needs to be applied to the framebuffer object that holds the initial 3D content. When doing post-processing, multisampling has no effect, because only 1 or 2 triangles are being rasterized and they span the entire viewport.
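A minimal sketch of that arrangement, assuming a single pass that rasterizes the 3D geometry; the names msaaFbo, resolveFbo and sceneTexture are made up for illustration, and a full deferred G-buffer setup would need the same idea applied to its geometry pass:

const samples = gl.getParameter(gl.MAX_SAMPLES);

// 1. Multisampled framebuffer: the 3D geometry is rasterized into this one.
const msaaFbo = gl.createFramebuffer();
const msaaColor = gl.createRenderbuffer();
const msaaDepth = gl.createRenderbuffer();

gl.bindRenderbuffer(gl.RENDERBUFFER, msaaColor);
gl.renderbufferStorageMultisample(gl.RENDERBUFFER, samples, gl.RGBA8, canvas.width, canvas.height);
gl.bindRenderbuffer(gl.RENDERBUFFER, msaaDepth);
gl.renderbufferStorageMultisample(gl.RENDERBUFFER, samples, gl.DEPTH_COMPONENT24, canvas.width, canvas.height);

gl.bindFramebuffer(gl.FRAMEBUFFER, msaaFbo);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, msaaColor);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, msaaDepth);

// 2. Single-sample framebuffer backed by a texture: the resolve target that
//    the post-processing passes (ssao, uber, tonemap, ...) sample from.
const resolveFbo = gl.createFramebuffer();
const sceneTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, sceneTexture);
gl.texStorage2D(gl.TEXTURE_2D, 1, gl.RGBA8, canvas.width, canvas.height);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.bindFramebuffer(gl.FRAMEBUFFER, resolveFbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, sceneTexture, 0);

// Per frame: draw the geometry into the multisampled framebuffer ...
gl.bindFramebuffer(gl.FRAMEBUFFER, msaaFbo);
gl.clearBufferfv(gl.COLOR, 0, [0.0, 0.0, 0.0, 1.0]);
gl.clearBufferfv(gl.DEPTH, 0, [1.0]);
// ... draw the 3D scene here; this is the pass where triangle edges get anti-aliased ...

// ... then resolve the samples into the texture-backed framebuffer.
gl.bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFbo);
gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, resolveFbo);
gl.blitFramebuffer(0, 0, canvas.width, canvas.height,
                   0, 0, canvas.width, canvas.height,
                   gl.COLOR_BUFFER_BIT, gl.NEAREST);

The post-processing chain then samples sceneTexture. Blitting the tonemapped full-screen quad through a multisampled framebuffer afterwards cannot add anti-aliasing, because at that point the only geometry the rasterizer sees is the quad itself, whose edges lie on the viewport border.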

prideout