
(How) can a scalar be cast to another scalar without conversion in GLSL?

The definition of "cast" used in this question is changing the interpretation of the exact same data without changing the data itself, assuming the two types have the same number of bits, i.e. (float)B00101111 == B00101111.

The definition of "conversion" used in this question is changing both the interpretation of the data and reformatting that data into some mathematical approximation of the original value, i.e. (float)1 == 1.0.

Section 5.4.3 of the GLSL 4.30.6 spec implies that scalar float-to-int and int-to-float constructors perform only conversion: "When constructors are used to convert any floating-point type to an integer type, the fractional part of the floating-point value is dropped." - pg. 87

Another nail in the coffin is this line in section 5.1, pg. 85: "There is no typecast operator; constructors are used instead."
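For instance, a constructor applied to a float performs a value conversion, not a bit-level cast (a minimal sketch of the behavior described in 5.4.3):

```glsl
float f = 1.5;
int i = int(f); // conversion per section 5.4.3: fractional part dropped, i == 1
// The 32-bit pattern of f is not preserved; this is not a cast.
```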

The goal is to use a buffer texture (texture buffer) and texelFetch(...) to obtain float vec4s (128 bits) and fill a struct such as:

struct MyStruct
{
int foo; //32bit
float bar; //32bit
int baz; //32bit
short blah; //16bit
short hmm; //16bit
}; // 128 bits total

...but there is trouble pinning down how to break that vec4 apart into its binary sub-chunks and re-cast them to set the struct's values.

Would the following, in theory, work?

MyStruct myUuberChunk = MyStruct(myVec4);

The rationale behind using float vec4s (or int vec4s, for that matter), assuming the approach is correct, is to get all values in one 128-bit fetch for bandwidth performance purposes (most cards have four 32-bit memory buses).
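In sketch form, the intended access pattern would look something like this (the sampler name and index are placeholders, not working code):

```glsl
uniform samplerBuffer dataBuf; // floating-point buffer texture

vec4 raw = texelFetch(dataBuf, index); // one 128-bit fetch
// ...somehow reinterpret raw's bits as MyStruct's five fields?
```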

Charles
user515655

2 Answers


Would the following, in theory, work?

No.

GLSL does not allow you to directly reinterpret structures. While GLSL (3.30+) does guarantee that integers and floats are 32-bits, you can't just take a vec4 and pretend that it's something else.

However, OpenGL 3.3+ does allow you to reinterpret individual scalar values. First, your code should use an integer buffer texture, not a floating-point one. That's just to prevent the texel fetch logic from doing something unpleasant in the case of denormalized values, NaN, or other floating-point oddities.

Thus, you should be using a usamplerBuffer and getting back a uvec4.

To get the actual values, you have to do bit reinterpretation, provided by appropriate library functions:

struct Data
{
  int first;
  float value;
  int second;
  int half1;
  int half2;
};

Data UnpackStruct(uvec4 data) //"packed" is reserved in some GLSL versions, so use a different name.
{
  Data ret;
  ret.first = int(data[0]); //Sign bit is preserved, per the GLSL standard.
  ret.value = uintBitsToFloat(data[1]);
  ret.second = int(data[2]);
  ret.half1 = int((data[3] >> 16) & 0xFFFFu); //uint-to-int requires an explicit constructor.
  ret.half2 = int(data[3] & 0xFFFFu);
  return ret;
}
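A usage sketch, assuming a hypothetical sampler name and texel index (these are not part of the original answer):

```glsl
uniform usamplerBuffer structData;

void main()
{
    uvec4 raw = texelFetch(structData, 0); // one 128-bit fetch, bits untouched
    Data d = UnpackStruct(raw);
    // d.value now holds the bit-identical float; no conversion was applied.
}
```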
Nicol Bolas
  • Thanks for mentioning uintBitsToFloat; I overlooked that and afterward realized I was looking at a GLSL 1.5 reference card. Could this be roughly how OpenCL makes global buffer accesses so flexible with structs? – user515655 Nov 28 '12 at 17:39

You can use bitwise operations to decode/encode values, especially bit shifts.
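For example, two 16-bit halves could be decoded from, and re-encoded into, one 32-bit word like this (a sketch; the function names are invented):

```glsl
// Decode two 16-bit halves from one 32-bit word.
ivec2 unpackHalves(uint word)
{
    int hi = int((word >> 16) & 0xFFFFu); // upper 16 bits
    int lo = int(word & 0xFFFFu);         // lower 16 bits
    return ivec2(hi, lo);
}

// Re-encode two halves into one 32-bit word.
uint packHalves(ivec2 halves)
{
    return (uint(halves.x) << 16) | (uint(halves.y) & 0xFFFFu);
}
```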

JAre