I'm writing an optimized binary reader/writer for learning purposes. Everything worked fine until I wrote the tests for the encoding and decoding of decimals. My tests also check whether the BinaryWriter of the .NET Framework produces output that is compatible with my BinaryWriter and vice versa.
I'm mostly using unsafe code and pointers to write my variables into byte arrays. These are the results when writing a decimal via pointers and via the BinaryWriter:
BinaryWriter....: E9 A8 94 23 9B CA 4E 44 63 C5 44 39 00 00 1A 00
unsafe *decimal=: 00 00 1A 00 63 C5 44 39 E9 A8 94 23 9B CA 4E 44
My code for writing a decimal looks like this:
unsafe
{
    byte[] data = new byte[16];

    fixed (byte* pData = data)
        *(decimal*)pData = 177.237846528973465289734658334m;
}
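For reference, reading the value back works the same way in reverse. This is just a sketch (it assumes data still holds the 16 bytes written above):

unsafe
{
    decimal value;

    fixed (byte* pData = data)
        value = *(decimal*)pData;

    // on the same machine this round-trips to 177.237846528973465289734658334m
}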
And using the BinaryWriter of the .NET Framework it looks like this:
using (MemoryStream ms = new MemoryStream())
{
    using (BinaryWriter writer = new BinaryWriter(ms))
        writer.Write(177.237846528973465289734658334m);

    byte[] data = ms.ToArray();
}
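The mismatch already shows up when comparing the two buffers directly. A minimal check (a sketch; unsafeData and writerData stand for the arrays produced by the two snippets above, and SequenceEqual needs using System.Linq):

// prints False, because the four 32-bit parts appear in a different order
Console.WriteLine(unsafeData.SequenceEqual(writerData));

Console.WriteLine(BitConverter.ToString(writerData)); // E9-A8-94-23-9B-CA-4E-44-63-C5-44-39-00-00-1A-00
Console.WriteLine(BitConverter.ToString(unsafeData)); // 00-00-1A-00-63-C5-44-39-E9-A8-94-23-9B-CA-4E-44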
Apparently Microsoft made their BinaryWriter incompatible with the way decimals are stored in memory. Looking into the reference source, the writer uses an internal method called GetBytes, and the output of GetBytes is not compatible with the in-memory representation of a decimal.
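If I read the reference source correctly, GetBytes emits the four 32-bit parts in the same order as the public decimal.GetBits method, i.e. lo, mid, hi, flags, which matches the BinaryWriter dump above, while the raw in-memory copy on my machine starts with the flags field. A small sketch to make that visible:

int[] bits = decimal.GetBits(177.237846528973465289734658334m);

// bits[0] = lo, bits[1] = mid, bits[2] = hi, bits[3] = flags (sign + scale)
foreach (int part in bits)
    Console.WriteLine(part.ToString("X8"));

// should print 2394A8E9, 444ECA9B, 3944C563, 001A0000 -
// the little-endian reversal of the BinaryWriter output above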
Is there a reason why Microsoft implemented writing decimals this way? And is it dangerous to use the unsafe approach to implement my own binary formats or protocols, given that the internal layout of decimal may change in the future?
The unsafe approach performs considerably better than the GetBytes call used by the BinaryWriter.