
I have been trying to get a 16-bit float (half-precision float) as an attribute into my GLSL vertex shader. It won't compile, and the error says:

error C7506: OpenGL does not define the global type half

but my #version is 410, so shouldn't it support half? Am I missing something obvious?

genpfault
ChaoSXDemon
  • Edit in a [mcve]. – genpfault Jun 16 '17 at 18:04
  • You can use 16-bit half floats in vertex attributes without any issue, but you cannot use 16-bit half floats in the shaders. They will just be converted to 32-bit floats like any other non-integer attribute type. – derhass Jun 16 '17 at 18:34

2 Answers

OpenGL and OpenGL ES define two distinct kinds of precision:

  • Storage precision in buffers.
  • Minimum computational precision used in shaders.

Storage precision is defined by your vertex attribute upload, such as GL_FLOAT or GL_HALF_FLOAT. This will be the precision used to store the data in memory.

Usage precision is defined in the shader as highp (at least 32-bit), mediump (at least 16-bit), and lowp (at least 9-bit). These are minimum precisions; it is perfectly legal for a shader to specify a variable as mediump and for the shader compiler to generate fp32 data types. Desktop GPUs tend to only support fp32 computation, so highp, mediump, and lowp all map to fp32 data types (the precision qualifiers are only included to keep compatibility with OpenGL ES shaders, and can legally be ignored by the compiler). Mobile GPUs implementing OpenGL ES tend to map highp to fp32, and mediump and lowp to fp16. Detailed information can be found in Section 4.5.1 of the GLSL ES 3.0 Specification.

When binding a vertex attribute in memory to an input shader variable the storage and usage precisions are not required to match; the API will include transparent attribute precision conversion. It is perfectly legal for a user to upload e.g. a GL_FLOAT and then use it in a shader as a mediump fp16 variable, although it would be a waste of memory bandwidth to do so.
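To make the split concrete: GL_HALF_FLOAT only tells the driver how to interpret the bytes in the buffer, so the application has to pack the 16-bit values itself (or use a helper library). A minimal packing sketch, assuming a hypothetical helper name and using truncation instead of round-to-nearest for brevity:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical helper: pack an IEEE 754 binary32 value into the binary16
 * bit layout that GL_HALF_FLOAT expects. Truncates the mantissa, folds
 * subnormals to signed zero, and maps NaN to infinity for brevity;
 * production code should round to nearest and preserve NaN. */
static uint16_t float_to_half(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);          /* reinterpret float as raw bits */

    uint16_t sign = (uint16_t)((bits >> 16) & 0x8000u);
    int32_t  exp  = (int32_t)((bits >> 23) & 0xFFu) - 127 + 15;  /* rebias */
    uint32_t mant = bits & 0x007FFFFFu;

    if (exp <= 0)  return sign;                        /* too small: signed zero */
    if (exp >= 31) return (uint16_t)(sign | 0x7C00u);  /* too large: infinity */
    return (uint16_t)(sign | ((uint32_t)exp << 10) | (uint16_t)(mant >> 13));
}
```

A buffer packed this way is then uploaded with e.g. `glVertexAttribPointer(loc, 3, GL_HALF_FLOAT, GL_FALSE, 0, 0)`, while the shader still declares the input as a plain `float`/`vec3`.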

LJᛃ
solidpixel

In the absence of an MCVE demonstrating otherwise, I assume you tried something like:

half float aHalfFloat;

However, "half" is a reserved keyword in #version 410:

OpenGL Shading Language 4.10 Specification, page 15 (emphasis mine):

The following are the keywords reserved for future use. Using them will result in an error:

common partition active asm class union enum typedef template this packed goto inline noinline volatile public static extern external interface long short **half** fixed unsigned superp input output hvec2 hvec3 hvec4 fvec2 fvec3 fvec4 sampler3DRect filter image1D image2D image3D imageCube iimage1D iimage2D iimage3D iimageCube uimage1D uimage2D uimage3D uimageCube image1DArray image2DArray iimage1DArray iimage2DArray uimage1DArray uimage2DArray image1DShadow image2DShadow image1DArrayShadow image2DArrayShadow imageBuffer iimageBuffer uimageBuffer sizeof cast namespace using row_major

In addition, all identifiers containing two consecutive underscores (__) are reserved as possible future keywords.
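The declaration that does compile under #version 410 therefore just uses float; the 16-bit storage is specified on the API side (GL_HALF_FLOAT in the attribute pointer call), not in GLSL. A sketch, with the attribute name carried over from the guess above:

```glsl
#version 410

// The buffer backing this attribute may contain GL_HALF_FLOAT data;
// GLSL still declares it as a plain float -- "half" is reserved and
// will not compile.
in float aHalfFloat;

void main()
{
    gl_Position = vec4(aHalfFloat, 0.0, 0.0, 1.0);
}
```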

genpfault
  • So I should use lowp float? – ChaoSXDemon Jun 16 '17 at 18:10
  • @ChaoSXDemon: That won't do anything, see "4.5.2 Precision Qualifiers", bottom of page 55. – genpfault Jun 16 '17 at 18:13
  • so I can't use 16-bit floats as an attribute in 410? – ChaoSXDemon Jun 16 '17 at 18:21
  • @ChaoSXDemon: Doesn't look like it. – genpfault Jun 16 '17 at 18:25
  • Alright, thanks! I read the type part and the precision part and there is no mention of 16-bit floats. lowp is driver dependent :( – ChaoSXDemon Jun 16 '17 at 18:29
  • @ChaoSXDemon: You can use 16-bit floating point attributes. But you don't specify that *in the shader*. Just like the shader doesn't say that an attribute is a normalized integer or whatever. Outside of the basic type of the attribute (float, integer, uint, and double), the place where the format is determined is [in your `glVertexAttrib*Format/Pointer` call](https://www.khronos.org/opengl/wiki/Vertex_Format). And that accepts 16-bit floats. – Nicol Bolas Jun 16 '17 at 18:32
  • Yes, I got it to run and compile with GL_HALF_FLOAT as the type, and in the shader I used just "float" as the type. You are saying this setup will get me 16-bit floats, right? – ChaoSXDemon Jun 16 '17 at 19:00
  • All this says is that the GPU *reads* the data as half float values, [the actual representation used within the shader program is still 32bit floats](https://www.khronos.org/opengl/wiki/Small_Float_Formats#Half_floats). – LJᛃ Jun 18 '17 at 16:13
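The expansion the GPU performs when reading a GL_HALF_FLOAT attribute can be sketched in C (hypothetical helper name, following the IEEE 754 binary16 layout). It shows why, for example, 0.1f uploaded as a half (encoded as 0x2E66 under round-to-nearest) reaches the shader as 0.0999755859375:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical helper: expand a GL_HALF_FLOAT (IEEE 754 binary16) bit
 * pattern to the 32-bit float the shader actually computes with.
 * NaN payload handling is simplified. */
static float half_to_float(uint16_t h)
{
    uint32_t sign = (uint32_t)(h & 0x8000u) << 16;
    uint32_t exp  = (h >> 10) & 0x1Fu;
    uint32_t mant = h & 0x03FFu;
    uint32_t bits;
    float f;

    if (exp == 31) {                      /* infinity or NaN */
        bits = sign | 0x7F800000u | (mant << 13);
    } else if (exp == 0) {                /* zero or subnormal: mant * 2^-24 */
        f = (float)mant / 16777216.0f;    /* exact in fp32 */
        return sign ? -f : f;
    } else {                              /* normal number: rebias the exponent */
        bits = sign | ((exp + 112u) << 23) | (mant << 13);
    }
    memcpy(&f, &bits, sizeof f);
    return f;
}
```

Every binary16 value is exactly representable in binary32, so this expansion itself is lossless; the precision was lost earlier, when the data was packed down to 16 bits for storage.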