I’m writing some serialization code that will work at a lower level than I’m used to. I need functions to take various value types (int32_t, int64_t, float, etc.) and shove them into a vector<unsigned char> in preparation for being written to a file. The file will be read and reconstituted in an analogous way.
The functions to write to the vector look like this:
#include <cassert>
#include <cstdint>
#include <vector>

void write_int32(std::vector<unsigned char>& buffer, int32_t value)
{
    // Append the four bytes in big-endian order, most significant first.
    buffer.push_back((value >> 24) & 0xff);
    buffer.push_back((value >> 16) & 0xff);
    buffer.push_back((value >> 8) & 0xff);
    buffer.push_back(value & 0xff);
}

void write_float(std::vector<unsigned char>& buffer, float value)
{
    // Reinterpret the float's bit pattern as an int32_t and reuse write_int32.
    assert(sizeof(float) == sizeof(int32_t));
    write_int32(buffer, *(int32_t *)&value);
}
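For context, the read side will reconstitute values analogously. Here is a rough sketch of what I have in mind for read_int32 (the pos bookkeeping is just illustrative; read_float would pun the bits back in the same way):

#include <cstddef>
#include <cstdint>
#include <vector>

int32_t read_int32(const std::vector<unsigned char>& buffer, std::size_t& pos)
{
    // Reassemble the four big-endian bytes written by write_int32.
    // Unsigned arithmetic avoids undefined behavior when shifting into the sign bit.
    uint32_t bits = (uint32_t(buffer[pos])     << 24)
                  | (uint32_t(buffer[pos + 1]) << 16)
                  | (uint32_t(buffer[pos + 2]) << 8)
                  |  uint32_t(buffer[pos + 3]);
    pos += 4;
    return int32_t(bits); // implementation-defined for values > INT32_MAX before C++20
}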
These bit-shifting, type-punning atrocities seem to work on the single machine I’ve used so far, but they feel extremely fragile. Where can I learn which operations are guaranteed to yield the same results across architectures, float representations, etc.? Specifically, is there a safer way to do what I’ve done in the two write functions above?
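One alternative I’ve come across for the float case is memcpy instead of the pointer cast, which at least avoids the aliasing violation. A minimal sketch, reusing write_int32 from above and still assuming float is a 32-bit IEEE 754 type on both ends:

#include <cstdint>
#include <cstring>
#include <vector>

void write_float(std::vector<unsigned char>& buffer, float value)
{
    static_assert(sizeof(float) == sizeof(int32_t), "float must be 32 bits");
    int32_t bits;
    std::memcpy(&bits, &value, sizeof(bits)); // copies the object representation; no aliasing issues
    write_int32(buffer, bits);
}

Is that actually guaranteed to be portable, or does it just move the problem around (the byte order and the float format are still assumptions)?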