8

The OpenGL graphics pipeline changes every year, and the number of programmable stages keeps growing. In the end, as OpenGL programmers we write many little programs (vertex, fragment, geometry, tessellation, ...).

Why is there such strong specialization between the stages? Do they all run on different parts of the hardware? Why not just write one block of code that describes what should come out at the end, instead of juggling between the stages?

http://www.g-truc.net/doc/OpenGL%204.3%20Pipeline%20Map.pdf

In this pipeline PDF we can see the beast.

user1767754

5 Answers

7

In the days of "Quake" (the game), developers had the freedom to do anything with their CPU rendering implementations; they were in control of everything in the "pipeline".

With the introduction of the fixed-function pipeline and GPUs, you got "better" performance, but lost a lot of that freedom. Graphics developers are pushing to get that freedom back, hence the pipeline becomes more customizable every day. GPUs are even "fully" programmable now using technologies such as CUDA/OpenCL, even if that is not strictly about graphics.

On the other hand, GPU vendors cannot replace the whole pipeline with a fully programmable one overnight. In my opinion, this boils down to multiple reasons:

  • GPU capabilities and cost: GPUs evolve with each iteration, and it makes no sense to throw away the whole architecture you have and replace it overnight. Instead you add new features and enhancements every iteration, especially when developers ask for them (example: the tessellation stage). Think of CPUs: Intel tried to replace the x86 architecture with Itanium, losing backward compatibility; having failed, they eventually copied what AMD did with the AMD64 (x86-64) architecture.
  • They also can't fully replace it because of support for legacy applications, which are more widely used than one might expect.
concept3d
5

Historically, there actually were different processing units for the different programmable parts - there were vertex shader processors and fragment shader processors, for example. Nowadays, GPUs employ a "unified shader architecture" where all types of shaders are executed on the same processing units. That's why non-graphics use of GPUs such as CUDA or OpenCL is possible (or at least easy).

Notice that the different shaders have different inputs/outputs - a vertex shader is executed for each vertex, a geometry shader for each primitive, a fragment shader for each fragment. I don't think this could be easily captured in one big block of code.
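As a rough illustration (the attribute names and locations here are made up), a minimal vertex/fragment pair looks like this; the vertex shader consumes per-vertex attributes while the fragment shader only sees the interpolated results, so the two signatures don't naturally collapse into one block of code. Both shaders are shown together for brevity; in practice each is compiled as its own shader object.

    // Vertex shader: invoked once per vertex, reads per-vertex attributes.
    #version 330 core
    layout(location = 0) in vec3 position;   // per-vertex input
    layout(location = 1) in vec3 normal;
    out vec3 vNormal;                         // handed to the rasteriser for interpolation

    void main() {
        gl_Position = vec4(position, 1.0);
        vNormal = normal;
    }

    // Fragment shader: invoked once per fragment, sees only interpolated values.
    #version 330 core
    in vec3 vNormal;
    out vec4 fragColor;

    void main() {
        fragColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
    }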

And last but definitely far from least, performance. There are still fixed-function stages between the programmable parts (such as rasterisation). And for some of these, it's simply impossible to make them programmable (or callable outside of their specific time in the pipeline) without reducing performance to a crawl.

Angew is no longer proud of SO
  • Thanks for your answer, but if we could see these stages like a class with (virtual) methods it would be much easier. Is it maybe because of backward compatibility as well? – user1767754 May 12 '14 at 08:23
3

Because each stage has a different purpose

The vertex shader transforms the points to where they should be on the screen.

The fragment shader runs for each fragment (read: pixel of the triangles) and applies lighting and color.

Geometry and tessellation shaders both do things the classic vertex and fragment shaders cannot (replacing the drawn primitives with other primitives), and both are optional.

If you look carefully at that PDF you'll see different inputs and outputs for each shader.
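As a sketch of what "per primitive" means here, a minimal pass-through geometry shader receives a whole triangle and re-emits it; a vertex or fragment shader has no way to express this, because it never sees more than one vertex or one fragment at a time.

    #version 330 core
    layout(triangles) in;                          // input: one whole primitive
    layout(triangle_strip, max_vertices = 3) out;  // output: zero or more primitives

    void main() {
        // Re-emit the incoming triangle unchanged; a real geometry shader
        // could discard it, duplicate it, or emit different primitives entirely.
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }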

ratchet freak
1

Separating each shader stage also allows you to mix and match shaders beginning with OpenGL 4.1. For example, you can use one vertex shader with multiple different fragment shaders, and swap out the fragment shaders as needed. Doing that when shaders are specified as a single code block would be tricky, if not impossible.

More info on the feature: http://www.opengl.org/wiki/GLSL_Object#Program_separation
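As a rough sketch (the variable names are made up), the GLSL side of a separable vertex shader can look like this: the explicit output location and the redeclared gl_PerVertex block let it be paired with any fragment shader reading the same location, while the actual mixing and matching happens on the API side with calls such as glCreateShaderProgramv and glUseProgramStages.

    // Vertex stage, compiled into its own separable program.
    #version 410 core
    layout(location = 0) in vec3 position;
    layout(location = 0) out vec3 vColor;

    out gl_PerVertex { vec4 gl_Position; };   // redeclared for separable programs

    void main() {
        gl_Position = vec4(position, 1.0);
        vColor = vec3(1.0);
    }

    // Any number of interchangeable fragment shaders can consume location 0.
    #version 410 core
    layout(location = 0) in vec3 vColor;
    out vec4 fragColor;

    void main() {
        fragColor = vec4(vColor, 1.0);   // a second variant might tint or invert this
    }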

Colonel Thirty Two
1

Mostly because nobody wants to re-invent the wheel if they do not have to.

Many of the specialized things that are still fixed-function would simply make life more difficult for developers if they had to be programmed from scratch to draw a single triangle. Rasterization, for instance, would truly suck if you had to implement primitive coverage yourself or handle attribute interpolation. It might add some novel flexibility, but the vast majority of software does not require that flexibility and developers benefit tremendously from never thinking about this sort of stuff unless they have some specialized application in mind.

Truth be told, you can implement the entire graphics pipeline yourself using compute shaders if you are so inclined. Performance generally will not be competitive with pushing vertices through the traditional render pipeline and the amount of work necessary would be quite daunting, but it is doable on existing hardware. Realistically, this approach does not offer a lot of benefits for rasterized graphics, but implementing a ray-tracing based pipeline using compute shaders could be a worthwhile use of time.
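For a sense of what that starting point looks like: a compute shader is just an arbitrary kernel over an image or buffer, so a do-it-yourself rasteriser would begin with something like this trivial clear pass (the binding point and workgroup size here are arbitrary) and then build coverage, interpolation, and depth testing on top by hand.

    #version 430
    layout(local_size_x = 8, local_size_y = 8) in;
    layout(rgba8, binding = 0) uniform writeonly image2D outImage;

    void main() {
        ivec2 texel = ivec2(gl_GlobalInvocationID.xy);
        // Everything the fixed-function rasteriser normally does (triangle setup,
        // coverage, attribute interpolation, depth testing) would have to be
        // written manually in shaders like this one.
        imageStore(outImage, texel, vec4(0.0, 0.0, 0.0, 1.0));
    }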

Andon M. Coleman