I am definitely missing something obvious, but can anyone explain why the compression rate is so much better in the second case?
Case 1: very low compression, and sometimes even growth in size.
using (var memoryStream = new System.IO.MemoryStream())
using (var gZipStream = new GZipStream(memoryStream, CompressionMode.Compress))
{
    new BinaryFormatter().Serialize(gZipStream, obj);
    gZipStream.Close();
    return memoryStream.ToArray();
}
Case 2: much better compression, and I never saw a growth in size.
using (MemoryStream msCompressed = new MemoryStream())
using (GZipStream gZipStream = new GZipStream(msCompressed, CompressionMode.Compress))
using (MemoryStream msDecompressed = new MemoryStream())
{
    new BinaryFormatter().Serialize(msDecompressed, obj);
    byte[] byteArray = msDecompressed.ToArray();
    gZipStream.Write(byteArray, 0, byteArray.Length);
    gZipStream.Close();
    return msCompressed.ToArray();
}
I have implemented the mirrored decompression for both cases, and in both I can deserialize the result back into the source object without any issues.
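For reference, the mirrored decompression could look like the sketch below (the `Compress`/`Decompress` names and the sample payload are my own; note that BinaryFormatter is obsolete and disabled by default in .NET 5+, so this assumes .NET Framework semantics):

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Runtime.Serialization.Formatters.Binary;

public static class RoundTrip
{
    // Case 2 compression: serialize fully first, then hand GZipStream one big Write.
    public static byte[] Compress(object obj)
    {
        using (MemoryStream msCompressed = new MemoryStream())
        using (GZipStream gZipStream = new GZipStream(msCompressed, CompressionMode.Compress))
        using (MemoryStream msUncompressed = new MemoryStream())
        {
            new BinaryFormatter().Serialize(msUncompressed, obj);
            byte[] byteArray = msUncompressed.ToArray();
            gZipStream.Write(byteArray, 0, byteArray.Length);
            gZipStream.Close();
            // ToArray still works on the (now closed) backing MemoryStream.
            return msCompressed.ToArray();
        }
    }

    // Mirrored decompression: BinaryFormatter reads straight from the GZipStream.
    public static object Decompress(byte[] compressed)
    {
        using (MemoryStream msCompressed = new MemoryStream(compressed))
        using (GZipStream gZipStream = new GZipStream(msCompressed, CompressionMode.Decompress))
        {
            return new BinaryFormatter().Deserialize(gZipStream);
        }
    }

    public static void Main()
    {
        string original = new string('x', 10000);
        string restored = (string)Decompress(Compress(original));
        Console.WriteLine(restored == original);
    }
}
```

The same `Decompress` works for Case 1 as well, since both cases produce a gzip stream whose uncompressed content is the BinaryFormatter payload.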
Here are some stats:
UncSize: 58062085 B, Comp1: 46828139 B, 81%
UncSize: 58062085 B, Comp2: 31326029 B, 54%
UncSize: 7624735 B, Comp1: 7743947 B, 102%
UncSize: 7624735 B, Comp2: 5337522 B, 70%
UncSize: 1237628 B, Comp1: 1265406 B, 102%
UncSize: 1237628 B, Comp2: 921695 B, 74%