3

I recently read this list and I noticed that almost everything I studied from the OpenGL Red Book is considered deprecated. I'm talking about pixel transfer operations, pixel drawings, accumulation buffer, Begin/End functions (!?), automatic mipmap generation and current raster position.

Why did they flag these features as deprecated? Will it be okay to still use them? What are the workarounds?

Pilpel
  • At least the part about immediate mode ("begin/end functions") has been answered on gamedev.stackexchange.com: [Why does OpenGL >= 3 only allow VBOs?](http://gamedev.stackexchange.com/questions/21733/why-does-opengl-3-only-allow-vbos) –  Aug 19 '12 at 14:58
  • The simplest way to start learning modern OpenGL is to get into OpenGL ES 2.0. In that, they actually *removed* these deprecated functions and force you to use the more modern API. – Cornstalks Aug 19 '12 at 14:58
  • Welcome to the world of software development. Everything I learned about Zilog Z80 and CP/M in 1980 was useless when the first PC arrived. So we had to learn the 8088 and PC-DOS instead. Which was useless with 16-bit Windows, which was useless for 32-bit Windows, etc. That's the way it works; just continue learning! – Bo Persson Aug 19 '12 at 15:00
  • I don't plan on coding for embedded systems so OpenGL ES won't help. – Pilpel Aug 19 '12 at 15:09
  • It is not considered deprecated. It is *removed*. There's a difference. – Nicol Bolas Aug 19 '12 at 15:10
  • @Cornstalks: The functionality *is* removed in core OpenGL contexts. – Nicol Bolas Aug 19 '12 at 15:10
  • @NicolBolas Indeed, it is removed *in core profiles*. In the "compatibility profile", all these features are alive and kickin', and aside from quite recent Mac OS X, I don't know of a single platform that does OpenGL X.Y core but not X.Y compatibility. As far as I know, you have to explicitly request a core profile context on pretty much every platform. And of course this will remain true for as long as programs using any of these features are of interest. –  Aug 19 '12 at 15:13
  • @delnan: My point is that the functionality **is not deprecated**. In core it's removed, and in compatibility it's not marked deprecated. – Nicol Bolas Aug 19 '12 at 15:25
  • @BoPersson When new hardware and a new system come along, it's natural that they're programmed differently. But in the case of OpenGL it's a breaking change. Deprecating/removing basic features like `glBegin` or basic matrix operations like `glOrtho` and `glLoadIdentity` is something like removing `printf` from the C API just because it's faster to use `puts`. I would rather have changed the old API to have `glVbBegin` or something like that, to manage vertex buffers and matrices for you. Old code would be easier to port... But no, you have to write less intuitive, more boilerplate code, pfff... – Calmarius Dec 03 '13 at 09:09
  • @Calmarius the semantics are too different to make this ("glVbBegin", for example) possible *and* efficient. – Display Name Dec 21 '13 at 23:05
  • @SargeBorsch As far as I know display lists were turned into vertex buffers, when possible. Now I got accustomed a bit with the new way. But it would be still nice to have some sane defaults to reduce the amount of setup code. – Calmarius Dec 22 '13 at 17:32
  • @BoPersson: you are exaggerating. The foundations don't really change. – Yakov Galka Sep 13 '14 at 16:01

2 Answers

12

In my opinion it's for the better. This so-called immediate mode is indeed deprecated as of OpenGL 3.0, mainly because its performance is not optimal.

In immediate mode you use calls like glBegin and glEnd, so the rendering of primitives depends on a stream of commands from the program; OpenGL can't advance until it receives the appropriate command from the CPU. Instead, you can use buffer objects to store all your vertices and data, and then tell OpenGL to render the primitives in that buffer with commands like glDrawArrays, glDrawElements, or even more specialized commands like glDrawElementsInstanced. While the GPU is busy executing those commands and drawing the buffer into the target framebuffer (basically a render target), the program can go off and issue other commands. This way both the CPU and the GPU are busy at the same time, and no time is wasted.
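To sketch the difference, here's a rough side-by-side for a single triangle (a hypothetical example, assuming a GL 3.x context with function pointers already loaded, and a shader program already bound for the modern path):

```c
/* Old, deprecated immediate mode: one CPU->driver call per vertex. */
glBegin(GL_TRIANGLES);
glVertex2f(-0.5f, -0.5f);
glVertex2f( 0.5f, -0.5f);
glVertex2f( 0.0f,  0.5f);
glEnd();

/* Modern equivalent: upload the vertices once, then draw from the buffer. */
static const GLfloat verts[] = {
    -0.5f, -0.5f,
     0.5f, -0.5f,
     0.0f,  0.5f,
};

GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

/* Attribute 0: two floats per vertex, tightly packed. */
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void *)0);

/* Each frame: a single call kicks off the whole draw on the GPU. */
glDrawArrays(GL_TRIANGLES, 0, 3);
```

The setup is more verbose, but it runs once; the per-frame cost drops from one call per vertex to one call per batch.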

Not the best explanation ever, but my advice: try to learn this new rendering pipeline instead. It's superior to immediate mode by far. I recommend tutorials like:

http://www.arcsynthesis.org/gltut/index.html

http://www.opengl-tutorial.org/

http://ogldev.atspace.co.uk/

Try to forget what you know so far; immediate mode has long been deprecated and shouldn't be used anymore. Instead, focus on the new technology ;)

**Edit:** Excuse me if I wrote 'intermediate' instead of 'immediate'; I think it's actually called 'immediate', I tend to mix them up.

Invalid
  • Which book do you recommend the most? – Pilpel Aug 19 '12 at 16:47
  • The first link (arcsynthesis) got me started with OpenGL 3.0+, note it is not entirely finished yet, but I haven't come across a better book. The other two links just have some nice and short tutorials on specific subjects (especially the third link, it has some tutorials on advanced subjects like shadow mapping or even deferred shading). – Invalid Aug 19 '12 at 16:52
9

Why did they flag these features as deprecated?

First, some terminology: they aren't deprecated. In OpenGL 3.0, they were deprecated (meaning "may be removed in later versions"); in 3.1 and above, most of them were removed. The compatibility profile brings the removed features back. And while it is widely implemented on Windows and Linux, Apple's 3.2 implementation only implements the core profile.

As to the reasoning behind the removal, it depends on which feature you're talking about. We can really only speculate as to why the ARB removed any specific feature:

pixel transfer operations

Pixel transfer operations have not been removed. If you're talking about glDrawPixels, that is indeed gone, but it is just one pixel transfer operation, not all of them.

Speaking of which:

pixel drawings

Because it was a horrible idea to begin with. glDrawPixels is a performance trap; it sounds nice and neat, but it performs terribly and because it's simple, people will try to use it.

Having something that is easy to do but terrible in performance encourages people to write terrible OpenGL applications.

accumulation buffer

Shaders can do this just fine. Better, in fact; they offer far more options than accumulation buffers ever did.
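For instance, the classic accumulation-buffer trick of averaging several passes can be done by rendering into a framebuffer-object texture and accumulating with blending (or combining the passes in a fragment shader). A rough sketch, assuming a GL 3.x context and hypothetical `width`/`height` variables:

```c
/* Render target replacing the accumulation buffer: a float texture
   attached to a framebuffer object (float, so values aren't clamped
   to 8 bits between passes). */
GLuint fbo, tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
             GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

/* Accumulate passes with additive blending; weight each pass in the
   shader (e.g. 1/N per pass for an N-sample average, which is what
   glAccum(GL_ACCUM, 1.0/N) used to do). */
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
```

And unlike glAccum, nothing restricts you to simple weighted sums; the combining shader can do whatever math you want.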

Begin/End functions (!?),

It's another performance trap. Immediate mode rendering is terribly slow.

automatic mipmap generation

Because it was a terrible idea to begin with. Having OpenGL decide when to do a heavyweight operation like generate mipmaps of a texture is not a good idea. The much better idea the ARB had was to just let you say, "OK, OpenGL, generate some mipmaps for this texture right now."
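That explicit replacement is glGenerateMipmap. A minimal sketch, assuming a GL 3.x context and hypothetical `pixels`/`width`/`height` holding an already-loaded RGBA image:

```c
/* Upload the base level, then explicitly ask for mipmaps,
   instead of the old automatic GL_GENERATE_MIPMAP texture parameter. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* You decide exactly when this (potentially expensive) work happens. */
glGenerateMipmap(GL_TEXTURE_2D);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                GL_LINEAR_MIPMAP_LINEAR);
```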

current raster position.

Another performance trap/bad idea.

Will it be okay to still use them?

That's up to you. NVIDIA has effectively pledged to support the compatibility profile in perpetuity. Which means that AMD and Intel probably will have to as well. So that covers Windows and Linux.

On Mac OS X, Apple controls the GL implementations more rigidly, and they seem committed to not supporting the compatibility profile. However, they seem to have little interest in advancing OpenGL, since they stopped at 3.2. Even Mountain Lion didn't update the OpenGL version.

What are the workarounds?

Stop using performance traps. Use buffer objects for your vertex data like everyone else. Use shaders. Use glGenerateMipmap.

Nicol Bolas
  • I'd like to point out that once you get used to modern OpenGL, you no longer want to go back. Once you've experienced the ease of fragment shaders, even for simple things, you never want to touch `glTexEnv` ever again. Once you've used vertex shaders for texture coordinate generation, you never want to touch `glTexGen` ever again. Immediate mode and display lists are cumbersome to work with and give people the wrong impression of working with a scene graph. The OpenGL matrix stack always lacked the one function you needed, so you ended up implementing the whole matrix stuff yourself anyway. – datenwolf Aug 19 '12 at 15:33