
Some of my vertex attributes are single unsigned bytes. I need them in my GLSL fragment shader, not for any "real" calculations, but for comparisons (like enums, if you will). I didn't find any unsigned byte or even byte data type in GLSL, so is there a way to use one as an input? If not (which at the moment seems to be the case), what is the purpose of GL_UNSIGNED_BYTE?

l'arbre
  • You could pass 32-bit bitfields (4 8-bit values in one int), but it would require an extra operation (bitwise 'and') when accessing them. – John B. Lambe Apr 07 '23 at 03:50
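
For reference, a hypothetical GLSL sketch of the unpacking that comment describes, assuming a `flat uint` varying named `v_packed` (a made-up name) that carries four 8-bit values:

```glsl
#version 330 core

flat in uint v_packed;   // hypothetical varying holding four packed 8-bit values
out vec4 fragColor;

void main() {
    uint byte0 =  v_packed        & 0xFFu;  // lowest byte
    uint byte1 = (v_packed >> 8u) & 0xFFu;  // next byte; >> 16u and >> 24u for the rest

    // Compare the unpacked values like enums.
    fragColor = (byte0 == 1u && byte1 == 2u) ? vec4(1.0) : vec4(0.0, 0.0, 0.0, 1.0);
}
```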

1 Answer


GLSL doesn't deal in sized types (well, not in sized types smaller than 32 bits). It only has signed/unsigned integers, floats, doubles, booleans, and vectors/matrices of them. If you pass an unsigned byte as an integer vertex attribute to a vertex shader, then it can read it as a uint type, which is 32 bits in size. Passing integral attributes requires the use of glVertexAttribIPointer/IFormat (note the "I").
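
For illustration, a minimal sketch of the buffer setup on the C side, assuming a hypothetical `Vertex` struct, attribute locations 0 and 1, and a buffer object `vbo` that is already created and filled:

```c
/* Assumes an OpenGL 3.3+ context, a loader header (e.g. GLEW/glad),
   and <stddef.h> for offsetof. The Vertex layout below is hypothetical. */
typedef struct {
    float   position[3];
    GLubyte kind;          /* the single-byte "enum" value */
} Vertex;

glBindBuffer(GL_ARRAY_BUFFER, vbo);

/* Position: a regular float attribute. */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), (void *)offsetof(Vertex, position));
glEnableVertexAttribArray(0);

/* The byte: glVertexAttribIPointer (note the "I") keeps it an integer,
   so the shader sees it as a uint rather than a converted float. */
glVertexAttribIPointer(1, 1, GL_UNSIGNED_BYTE,
                       sizeof(Vertex), (void *)offsetof(Vertex, kind));
glEnableVertexAttribArray(1);
```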

The vertex shader can then pass this value to the fragment shader as a uint type (but only with the flat interpolation qualifier). Of course, every fragment for a triangle will get the same value.
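
A sketch of the shader side under the same assumptions (attribute location 1, GLSL 3.30); the essential parts are the uint attribute type and the flat qualifier on the varying:

```glsl
// Vertex shader
#version 330 core

layout(location = 0) in vec3 a_position;
layout(location = 1) in uint a_kind;   // the unsigned byte shows up as a uint

flat out uint v_kind;                  // integer varyings must be flat

void main() {
    v_kind = a_kind;
    gl_Position = vec4(a_position, 1.0);
}
```

```glsl
// Fragment shader
#version 330 core

flat in uint v_kind;
out vec4 fragColor;

void main() {
    // Compare the value like an enum (note the u suffix on the literals).
    if (v_kind == 1u)
        fragColor = vec4(1.0, 0.0, 0.0, 1.0);
    else
        fragColor = vec4(0.0, 1.0, 0.0, 1.0);
}
```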

Nicol Bolas
  • This is sad. Are they thinking about changing this? – Earl of Lemongrab May 27 '20 at 10:45
  • Why is that sad? GPUs are heavily optimized for floating-point calculations. They trade the ability to do many different things for doing a single kind of task fast, so what you lose in memory size is usually more than made up for in computation speed. (Even on a CPU, your registers and caches are optimized for whole words of 32 or 64 bits.) – iliis Dec 14 '20 at 15:39
  • I'd hazard a guess that even though the byte (8 bits) will be received as an unsigned int (32 bits) in your shader, the very fact that you send the data up as a byte is a good gain anyway, as it saves sending an extra 3 bytes. When you've got hundreds of instanced transforms, this can add up. At least that's what I'd expect is the case. Not sure if there'd be any gain if using shared CPU/GPU memory, though. –  Jul 03 '21 at 19:20