Consider this the complete form of the question in the title: since OpenCL may become the common standard for serious GPU programming in the future (among other kinds of device programming), why not, when programming for OpenGL in a future-proof way, perform all GPU operations in OpenCL? That way you get the advantages of GLSL without its programmatic limitations.
That's kind of like asking "What is the point of Safari when there is Chrome?" :P – Sasha Chedygov Sep 10 '10 at 21:26
The main point is that OpenCL is not just a variant of GLSL; it is programmatically richer and offers more powerful management. – j riv Sep 10 '10 at 21:32
There's still plenty of fixed functionality in the OpenGL 4.3 pipeline, and the programmability only concerns specific shader stages that are connected together in that pipeline. OpenCL would allow expressing the entire functionality in program code, but - I guess - the question here really becomes: would it be slower that way? – Tronic Jun 27 '13 at 20:54
4 Answers
GLSL is the OpenGL Shading Language. It was originally intended for controlling the graphics pipeline.
OpenCL, on the other hand, is the Open Computing Language. It does not control graphics, but rather computation.
The two technologies are targeting different capabilities and functionality.
That being said, moving forward, there may be very little reason to use GLSL for computation purposes. However, as of today, more vendors fully support GLSL than OpenCL, so it is still useful for computation even though it is limited, since that is not its core purpose, at least right now.

Yes, I meant that, setting aside the stated purposes GLSL and OpenCL target, OpenCL appears to give the advantages of GLSL through a more powerful interface. Is there something that OpenCL cannot do that GLSL does, and is there anything OpenCL does in a worse manner? – j riv Sep 10 '10 at 21:35
@Lela: For pure computational purposes, right now, the only real advantage (to my knowledge) is one of deployment and vendor support - GLSL is better supported in the wild (hence my last paragraph). Otherwise, for pure computation, OpenCL is really nicer than GLSL IMO. – Reed Copsey Sep 10 '10 at 21:40
Well yeah. I should have stressed the 'future-proof' part I mentioned more. The main idea of this question centers on programming for the future, while most indications show OpenCL will become a widely supported standard. – j riv Sep 10 '10 at 21:42
In the future OpenCL might be able to replace GLSL. In the meantime there are still some issues with OpenGL interop, at least with the most important (NVidia/ATI) implementations.
OpenCL will not completely replace OpenGL, though. OpenGL does a whole lot more when it comes to raster graphics. The only raster graphics primitives in OpenCL are textures/images and it can't render graphics at all.
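For illustration only (the kernel and its names are made up): an OpenCL kernel can sample and write images, but nothing behind it rasterizes triangles for you - a minimal image-copy kernel looks roughly like this:

    // Hypothetical OpenCL C kernel: images are available as data, but nothing
    // here rasterizes geometry - any such logic would have to be written by hand.
    __constant sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                               CLK_ADDRESS_CLAMP_TO_EDGE |
                               CLK_FILTER_NEAREST;

    __kernel void copy_image(__read_only image2d_t src, __write_only image2d_t dst)
    {
        int2 pos = (int2)(get_global_id(0), get_global_id(1));
        float4 texel = read_imagef(src, smp, pos);
        write_imagef(dst, pos, texel);
    }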

What do you mean by "cannot render"? As far as I know, you could implement all of the OpenGL pipeline with OpenCL, except for the framebuffer (displaying stuff on screen). That doesn't mean that you couldn't still render to a texture or perhaps use OpenGL *only* for allocating a framebuffer that could then be used in OpenCL. – Tronic Jun 27 '13 at 20:49
Yes, you could do that, but it would be horribly slow. GPUs have dedicated and highly optimized hardware for blitting, rasterization, blending, tessellation and various other typical graphics operations. OpenCL doesn't provide any support for this hardware. That is what I meant. – dietr Jul 10 '13 at 13:09
GLSL is a "shading language". It is used for 3D rendering and has special data types particularly useful for that purpose (e.g. 4-component vectors and 4x4 matrices). The vertex and fragment shaders sit in a well-defined location inside the rendering pipeline, and they are automatically triggered on data flowing through this pipeline. The shaders also have direct access to the transformation and projection matrices of the 3D pipeline.
OpenCL is a "compute language". It is not specially designed for the compute tasks we see in 3D rendering, but is instead based on a subset of C. OpenCL does have data types similar to the vectors and matrices in GLSL (float4, float16), but they are less convenient to use. Also, you don't have a graphics context (which can be an advantage or a disadvantage), and the OpenCL kernel does not reside inside a 3D rendering pipeline.
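As a rough illustration of that difference (kernel name and signature are made up): a GLSL shader would simply write "mat4 * vec4", while an OpenCL kernel has float4 and dot() but no built-in matrix type, so a 4x4 transform has to be spelled out by hand:

    // Hypothetical kernel: applies a 4x4 transform (passed as four row vectors)
    // to an array of float4 points - what GLSL would express as "mat4 * vec4".
    __kernel void transform_points(__global float4 *points,
                                   float4 row0, float4 row1,
                                   float4 row2, float4 row3)
    {
        size_t i = get_global_id(0);
        float4 p = points[i];
        points[i] = (float4)(dot(row0, p), dot(row1, p),
                             dot(row2, p), dot(row3, p));
    }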
If you want compute modules that are plugged into the 3D rendering pipeline and are triggered by the rendering pipeline, use GLSL. If you want general computation on the GPU outside the 3D rendering pipeline, use OpenCL.
That does not mean OpenCL cannot be used to render 3D graphics. It can. In fact, you can implement your own pipeline solely in OpenCL, and then copy your drawing to the framebuffer. But if you just want to draw some 3D graphics, duplicating all the work by SGI, Nvidia, Intel and AMD engineers is probably not worth the hassle. Then it is easier to just use GLSL and get the shader plugged into a ready-to-use and fully performant OpenGL pipeline. Just consider that it was a major undertaking to write Mesa, the open-source OpenGL implementation.
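As a sketch of that do-it-yourself route (all names here are hypothetical, and real code needs error checking): an OpenCL "renderer" could write RGBA pixels into a buffer, which the host then reads back and hands to OpenGL as a texture to display:

    /* Sketch only, with made-up names: display an image that an OpenCL "software
       rasterizer" wrote into a buffer, by reading it back and uploading it to GL. */
    #include <CL/cl.h>
    #include <GL/gl.h>
    #include <stdlib.h>

    void present_cl_frame(cl_command_queue queue, cl_mem cl_pixels,
                          GLuint gl_tex, int width, int height)
    {
        size_t bytes = (size_t)width * height * 4;          /* RGBA8 */
        unsigned char *host_pixels = malloc(bytes);

        /* blocking read: wait for the OpenCL kernels to finish writing the frame */
        clEnqueueReadBuffer(queue, cl_pixels, CL_TRUE, 0, bytes,
                            host_pixels, 0, NULL, NULL);

        /* upload into an existing GL texture; drawing a textured quad shows it */
        glBindTexture(GL_TEXTURE_2D, gl_tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_RGBA, GL_UNSIGNED_BYTE, host_pixels);

        free(host_pixels);
    }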

@dietr: let me echo you here! The CL/GL interoperability can considerably harm overall performance, especially when handling a large number of object buffers (>100, for example). The context-switching overhead and the lack of support for structures of buffers or pointers to buffers will just kill my apps, to the point that I'm seriously considering using a geometry shader to replace my OpenCL code. And don't fool yourself, a complete replacement of OpenGL by OpenCL would require a whole recoding process that would not only take very long but also not be that advantageous. Besides, as the folks above said well, OpenGL does much more than just pipeline computation. I would say that if you intend to use CL/GL interop, try to rewrite your code in a vertex/geometry shader if you can.
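To show where that overhead comes from, here is roughly what one frame of CL/GL buffer sharing looks like for a single shared object (names are hypothetical, error handling omitted); multiply the acquire/release and the synchronization around it by hundreds of buffers and it adds up fast:

    /* Sketch of one frame of CL/GL buffer sharing (hypothetical names).
       Every shared object must be acquired before OpenCL touches it and
       released again before OpenGL may use it. */
    #include <CL/cl.h>
    #include <CL/cl_gl.h>
    #include <GL/gl.h>

    void run_cl_pass(cl_command_queue queue, cl_kernel kernel,
                     cl_mem shared_vbo, size_t work_items)
    {
        glFinish();                             /* GL must be done with the VBO first */
        clEnqueueAcquireGLObjects(queue, 1, &shared_vbo, 0, NULL, NULL);

        clSetKernelArg(kernel, 0, sizeof(shared_vbo), &shared_vbo);
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &work_items, NULL,
                               0, NULL, NULL);

        clEnqueueReleaseGLObjects(queue, 1, &shared_vbo, 0, NULL, NULL);
        clFinish(queue);                        /* CL must be done before GL draws again */
        /* ...glDrawArrays(...) etc. on the OpenGL side... */
    }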

I assume performance depends very much on what you do, and how. Typically you won't share hundreds of buffers, but only a handful at most. – dietr Apr 15 '12 at 14:05