
Similar to this question, I have 3 floats representing a vertex normal, and I need to pack them into OpenGL's INT_2_10_10_10_REV format. But I haven't been able to get a working solution (the shading in my scene looks terrible as a result).

Each float can be between -1 and 1. Based on things I've seen, I've tried to normalise the values and then pack them into 10-bit integers, but I'm clearly doing something wrong.

For a bit of context, this is how my code starts:

// Normalise and convert to integers (the 1023 is because I need 10-bit ints)
int[] intNormals = new int[3] {
    Convert.ToInt32(1023 * ((2 * ((normals[2] - normals.Min()) / range)) - 1)),
    Convert.ToInt32(1023 * ((2 * ((normals[1] - normals.Min()) / range)) - 1)),
    Convert.ToInt32(1023 * ((2 * ((normals[0] - normals.Min()) / range)) - 1))
};

I then start trying to convert the values in intNormals to 10-bit ints, which I'm attempting with bitmasks and BitArrays. I'm probably making multiple mistakes there, so rather than posting it all, my question is:

Are there any libraries for C# that will do this in a standard way? Or does anyone have a working C# implementation that they would be willing to share?

Update: I got it working, so I'll post it here (as an update rather than an answer) in case it's helpful for others. I used Vector3 to normalise the floats and then converted them to ints:

Vector3 normalised = Vector3.Normalize(new Vector3(normals[0], normals[1], normals[2]));

int[] intNormals = new int[3] {
    Convert.ToInt32(511 * normalised.X),
    Convert.ToInt32(511 * normalised.Y),
    Convert.ToInt32(511 * normalised.Z)
};

Note: I was originally multiplying by 1023 instead of 511. A signed 10-bit integer only covers -512 to 511, so scaling by 1023 overflows for positive components.

For each int I got the last ten bits like this:

int mask = Convert.ToInt32("1111111111", 2); // ten 1 bits, i.e. 0x3FF
int last10 = normal & mask;

...and then copied those last 10 bits of each int into a bool array that I could turn into a byte[4].

The other thing I was doing wrong was putting them together in the wrong order ([w][x][y][z]). It should be [x][y][z][w], reading from the least significant bit: x in bits 0-9, y in bits 10-19, z in bits 20-29, and w (the 2 unused bits) at the top.
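Putting those steps together, here is a minimal sketch of the whole packing step as a single method (PackNormal is my own name, not from any library; it assumes .NET's System.Numerics for Vector3):

using System;
using System.Numerics;

static class NormalPacking
{
    // Normalise, scale to signed 10-bit, mask to 10 bits, and shift into
    // the INT_2_10_10_10_REV layout: x in the low bits, then y, z, w on top.
    public static uint PackNormal(float x, float y, float z)
    {
        Vector3 n = Vector3.Normalize(new Vector3(x, y, z));

        // [-1, 1] -> roughly [-511, 511]; masking with 0x3FF keeps the low
        // 10 bits of the two's-complement value, which handles negatives.
        uint xBits = (uint)Convert.ToInt32(n.X * 511) & 0x3FF;
        uint yBits = (uint)Convert.ToInt32(n.Y * 511) & 0x3FF;
        uint zBits = (uint)Convert.ToInt32(n.Z * 511) & 0x3FF;

        return xBits | (yBits << 10) | (zBits << 20); // w (bits 30-31) stays 0
    }
}

BitConverter.GetBytes on the returned uint then gives the byte[4] to write into the vertex buffer; on a little-endian machine that matches the order OpenGL reads the packed integer in.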

I think my use of arrays is probably a bit dodgy and I will investigate structs as a more elegant solution.
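A rough sketch of that struct approach (along the lines suggested in the comments below; PackedNormal and its members are my naming, and only the X getter is shown):

using System;
using System.Numerics;

// One uint holding the INT_2_10_10_10_REV layout, packed in the constructor.
readonly struct PackedNormal
{
    public readonly uint Value;

    public PackedNormal(float x, float y, float z)
    {
        Vector3 n = Vector3.Normalize(new Vector3(x, y, z));
        Value = Pack(n.X) | (Pack(n.Y) << 10) | (Pack(n.Z) << 20);
    }

    // Scale to the signed 10-bit range and keep the low 10 bits.
    static uint Pack(float f) => (uint)Convert.ToInt32(f * 511) & 0x3FF;

    // Example getter: shift x's 10 bits to the top of an int, then
    // arithmetic-shift back down to sign-extend, and denormalise.
    public float X => ((int)(Value << 22) >> 22) / 511f;
}

Y and Z getters would do the same with left shifts of 12 and 2 respectively.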

Richard
  • My answer here might help: https://stackoverflow.com/questions/55499116/c-sharp-extract-bit-ranges-from-byte-array/55505757#55505757. Bit swizzling is never fun, it never works the first few times, but it isn't that complicated – Flydog57 Apr 21 '23 at 03:13
  • Does this answer your question? [C# extract bit ranges from byte array](https://stackoverflow.com/questions/55499116/c-sharp-extract-bit-ranges-from-byte-array) – Rabbid76 Apr 21 '23 at 04:07
  • If I were to do this, I'd forget about arrays (they always break my heart when doing this kind of thing). Instead I'd create a `struct` containing a `uint` (i.e. 32 bits and unsigned) - compatible with the unmanaged type. I'd create a constructor for that struct that took 3 floats, and three properties to read and write each of the three floats. The getters would swizzle out the appropriate 10 bits, shift them properly and then denormalize them back to a float. The setters would reverse the process. The constructor would simply invoke the setters. (look in my link above to see how to do this) – Flydog57 Apr 21 '23 at 04:39
  • Thanks for the suggestions, really appreciate it. This is all fairly new to me so every step has been a real struggle. I haven't tried the struct approach yet so I'll spend today trying to get that to work and post back. – Richard Apr 21 '23 at 08:47

0 Answers