14

I'm currently doing some research on OpenGL and shaders, but I can't seem to figure out whether there are any fundamental differences between blending with the built-in blend modes (glBlendFunc) and writing your own blend modes in a shader.

What are the reasons to choose the former or the latter? Are there any performance implications to choosing one over the other? Or is it just a matter of personal preference?

genpfault
polyclick

4 Answers

17

Traditional blending with glBlendFunc can't be replicated in a shader. Blending requires reading the destination framebuffer pixel before modifying it, and a fragment shader on current hardware cannot read the framebuffer it is currently rendering to.

Currently you can only output a color and choose from a limited selection of blend modes (glBlendFunc/glBlendEquation), which the GPU's dedicated blend hardware applies before the result is written to the framebuffer.
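
For reference, a minimal sketch of that fixed-function path, assuming a current GL context and an initialized function loader (GLEW is just this sketch's pick):

```c
#include <GL/glew.h>  /* any GL function loader will do; an assumption of this sketch */

/* Minimal sketch: two of the fixed blend configurations the hardware offers. */
void enable_over_blending(void)
{
    glEnable(GL_BLEND);
    glBlendEquation(GL_FUNC_ADD);                       /* the default equation */
    /* result = src.rgb * src.a + dst.rgb * (1 - src.a) */
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}

void enable_additive_blending(void)
{
    glEnable(GL_BLEND);
    glBlendEquation(GL_FUNC_ADD);
    /* result = src.rgb + dst.rgb -- a classic "glow" accumulation */
    glBlendFunc(GL_ONE, GL_ONE);
}
```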

Tim
  • Does this mean that a shader can't mix two textures, apply a blend mode to them, and output one new texture? I find that somewhat strange, since people are posting Photoshop blend-mode shaders on the web. Can you please elaborate? Thanks! – polyclick Jul 25 '12 at 08:16
  • No, you can still mix two textures in a shader, but that's not what blending does. Blending mixes the current pixel with the framebuffer. You can still achieve the same effects by other means (drawing to texture and mixing). @bartcla – Tim Jul 25 '12 at 15:52
  • Ok, so if I understand correctly: blending through the `glBlendFunc` is something totally different from blending textures in a shader? Let's say I want to mix two photos with a "multiply" blend mode. This should be done in a shader instead of using the `glBlendFunc`? Can you give an example where the `glBlendFunc` is used? – polyclick Jul 26 '12 at 09:24
  • To your first two questions, yes, that is correct. Just sample both textures and multiply them together in a shader (a sketch follows these comments). Blending is generally used more for transparency, for example if you want to draw a tinted window. It's for when you want to draw something transparent on top of the rest of the scene that has already been drawn. @bartcla – Tim Jul 26 '12 at 16:11
  • Things have changed a bit later though: https://www.khronos.org/opengl/wiki/Memory_Model#Texture_barrier – Trass3r Mar 09 '22 at 10:06
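
To illustrate what this comment thread settles on, here is a minimal sketch (with hypothetical sampler and variable names) of a Photoshop-style "multiply" done purely in a fragment shader. This is plain texture mixing, not framebuffer blending:

```c
/* Minimal sketch (hypothetical names): "multiply" two textures in a
 * fragment shader and output one new color. No glBlendFunc involved. */
const char *multiply_fs =
    "#version 330 core\n"
    "uniform sampler2D texA;\n"
    "uniform sampler2D texB;\n"
    "in vec2 uv;\n"
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    vec4 a = texture(texA, uv);\n"
    "    vec4 b = texture(texB, uv);\n"
    "    fragColor = a * b;  // Photoshop-style multiply\n"
    "}\n";
```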
10

When a shader reads from a texture while it is simultaneously rendering to the same texture, the results are undefined. This is why "traditional" or fixed-functionality blending with glBlendFunc and glBlendEquation is useful.

To mix two images using traditional blending, you render the first image, set the blending mode, function, and equation, and render the second image. This is guaranteed to give correct results, and is usually the fastest way to achieve effects like transparency.
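
A minimal sketch of that sequence, with hypothetical draw helpers standing in for the two images:

```c
#include <GL/glew.h>  /* any GL loader; an assumption of this sketch */

void draw_first_image(void);   /* hypothetical: draws the base image */
void draw_second_image(void);  /* hypothetical: draws the image to mix on top */

void mix_images_fixed_function(void)
{
    draw_first_image();                                 /* opaque base layer */
    glEnable(GL_BLEND);
    glBlendEquation(GL_FUNC_ADD);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    draw_second_image();                                /* blended over the first */
}
```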

To achieve the same effect with a shader, you need to render the first image to an auxiliary texture, change shaders, and render the second image to the actual framebuffer, doing the blending as a final step in the fragment shader. This is usually slower because of the extra overhead of texture reads, and will certainly use more GPU memory for the auxiliary texture.
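
A minimal sketch of that final pass, with hypothetical sampler names; the math mirrors glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA):

```c
/* Minimal sketch (hypothetical names): the last step of the shader-based
 * approach. The first image was already rendered into an auxiliary
 * texture via an FBO; this shader reads it back and blends manually. */
const char *blend_pass_fs =
    "#version 330 core\n"
    "uniform sampler2D firstImage;   // the auxiliary texture\n"
    "uniform sampler2D secondImage;\n"
    "in vec2 uv;\n"
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    vec4 dst = texture(firstImage, uv);\n"
    "    vec4 src = texture(secondImage, uv);\n"
    "    // dst*(1 - src.a) + src*src.a, i.e. classic alpha blending\n"
    "    fragColor = vec4(mix(dst.rgb, src.rgb, src.a), 1.0);\n"
    "}\n";
```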

On modern hardware, the difference between the two techniques tends to be small.

Brian Johnson
1

It is possible to simulate OpenGL blending from within a shader in three circumstances I'm aware of:

  1. You have Direct3D 11 or the OpenGL equivalent (image load/store), and you bind the render target as a read-write texture in your pixel shader. This enables arbitrarily complex blending operations, but will not have high performance when doing simple blending, because you are bypassing your GPU's dedicated hardware for simple blending (a sketch follows this list).

  2. You have an exotic "tile-based" GPU where even simple blending is done by an "alpha shader." In this case, there's no performance difference between a simple blend in OpenGL and equivalent shader code. But it's unlikely that you have such a GPU, and OpenGL doesn't expose this functionality anyway.

  3. You sidestep the entire fixed-function hardware rasterization pipeline and write your own as a set of compute shaders. If you can pull this off, something like an "alpha shader" would be part of your tile-based pipeline, but getting to that point is so much work that alpha blending would be the least of your concerns.
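
For case 1, a minimal sketch of the GL side, using image load/store from GL 4.2 (the OpenGL analogue of a D3D11 UAV); the texture name is hypothetical, and real code also has to synchronize overlapping fragments, as the first comment below points out:

```c
#include <GL/glew.h>  /* any GL loader; an assumption of this sketch */

/* Minimal sketch (hypothetical): expose the render target's texture as a
 * read-write image. The fragment shader can then imageLoad() the
 * destination pixel, blend arbitrarily, and imageStore() the result. */
void bind_target_for_programmable_blend(GLuint color_tex)
{
    glBindImageTexture(0,            /* image unit */
                       color_tex,    /* texture backing the render target */
                       0,            /* mip level */
                       GL_FALSE, 0,  /* not a layered binding */
                       GL_READ_WRITE,
                       GL_RGBA8);
    /* after drawing, glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT)
       makes the image writes visible to subsequent passes */
}
```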

  • 1: Not to mention, you have to deal with explicitly synchronizing access if multiple shader invocations access the same pixel. – Nicol Bolas Jul 24 '12 at 19:13
  • One nice aspect of tiled rasterization is that typically there is no possibility of race conditions between pixels. But if you were instead talking about 1., then yes, you're right: the guarantee that triangle rasterization completes serially would be violated in that case. –  Jul 24 '12 at 22:49
-1

This can actually be accomplished using the NV_texture_barrier OpenGL extension:

http://www.opengl.org/registry/specs/NV/texture_barrier.txt

Basically, after you draw the background, you call the glTextureBarrierNV() function to be sure the written texels are flushed; then you can perform a read/write operation in a single shader pass (be sure to avoid multisampled buffers, because only one write operation per texel is actually permitted).
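
A minimal sketch of that sequence, with hypothetical draw helpers; glTextureBarrierNV() comes from the extension linked above:

```c
#include <GL/glew.h>  /* any GL loader; an assumption of this sketch */

void draw_background(void);       /* hypothetical: renders into the bound FBO texture */
void draw_blended_overlay(void);  /* hypothetical: its shader samples that same texture */

void render_with_texture_barrier(void)
{
    draw_background();       /* write the background texels */
    glTextureBarrierNV();    /* flush caches so those writes are visible to reads */
    draw_blended_overlay();  /* read-modify-write in one pass; write each texel
                                at most once after the barrier */
}
```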

Leonardo Bernardini