I'm currently working on deferred shading in WebGL and I need to encode 3 integer values (each in the range [0..255], so 256^3 combinations) into a single 32-bit float and decode them later. Because this is for WebGL (GLSL ES 1.00) it has to be done without bitwise operations. Precision is not important to me (but I think it can be achieved).
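For concreteness, here is the idea with numbers I made up (not values from my actual shader): packing (r, g, b) = (3, 2, 1) gives

f = (3 + 2*256 + 1*65536) / 16777216 = 66051 / 16777216 ≈ 0.0039369

and multiplying f by 256, 65536 and 16777216 with floor() should peel the three bytes back off.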
This is what I have, but I think it is wrong because of the precision of the texture where I store the encoded value:
// Pack three byte values (each expected in [0..255]) into a single float in [0..1).
float packColor(vec3 color) {
    return (color.r + color.g * 256.0 + color.b * 256.0 * 256.0) / (256.0 * 256.0 * 256.0);
}

// Unpack; note this returns the channels normalized by /256.0, not the raw bytes.
vec3 decodeColor(float f) {
    float b = floor(f * 256.0);
    float g = floor(f * 65536.0) - b * 256.0;
    float r = floor(f * 16777216.0) - b * 65536.0 - g * 256.0;
    return vec3(r, g, b) / 256.0;
}
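For what it's worth, here is the kind of round-trip self-test I use (a sketch, assuming GLSL ES 1.00 with highp floats and the two functions above pasted in; the test bytes, the 0.5 tolerance and the pass/fail colors are arbitrary choices of mine). It only checks the in-shader arithmetic, so any loss that happens once the value is written to the texture would not show up here:

precision highp float;

// packColor() and decodeColor() from above go here.

void main() {
    vec3 original = vec3(123.0, 45.0, 6.0);         // three bytes in [0..255]
    float encoded = packColor(original);
    vec3 recovered = decodeColor(encoded) * 256.0;  // undo the /256.0 in decodeColor
    // White if all three bytes survive the round trip, red otherwise.
    bool ok = all(lessThan(abs(recovered - original), vec3(0.5)));
    gl_FragColor = ok ? vec4(1.0) : vec4(1.0, 0.0, 0.0, 1.0);
}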
Thanks.