There are quite a few cases where this new type and its supporting infrastructure can help, but whether they are common depends on your code...
As an example, see this pre-C# 7.2 implementation:
static void Main(string[] args)
{
    byte[] file = new byte[] { 0, 1, 2, 3, 4, 5, 6, 7, 8 };

    // Both calls below allocate a new array and copy the bytes into it
    byte[] header = file.Take(4).ToArray();
    byte[] content = file.Skip(4).ToArray();

    bool isValid = IsValidHeader(header);
}

private static bool IsValidHeader(byte[] header)
{
    return header[0] == 0 && header[1] == 1;
}
The file.Take(4).ToArray() and file.Skip(4).ToArray() calls are the problem here: we have to allocate two new arrays just to split the byte array into its two parts. As the byte array gets bigger, you can imagine the impact on performance and memory usage (a 10 MB file array suddenly takes 20 MB in memory).
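To make that cost visible, here is a small sketch (not part of the original example; the 10 MB size and the use of GC.GetAllocatedBytesForCurrentThread are my own additions) that measures roughly how much the Take/Skip approach allocates:

using System;
using System.Linq;

class AllocationDemo
{
    static void Main()
    {
        // Pretend this array holds a 10 MB file
        byte[] file = new byte[10 * 1024 * 1024];

        long before = GC.GetAllocatedBytesForCurrentThread();

        byte[] header = file.Take(4).ToArray();   // small copy
        byte[] content = file.Skip(4).ToArray();  // copies roughly the whole 10 MB again

        long after = GC.GetAllocatedBytesForCurrentThread();

        // The exact number includes LINQ's intermediate buffers, but it is in the 10 MB range
        Console.WriteLine($"Extra bytes allocated: {after - before:N0}");
    }
}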
Now see the C# 7.2 implementation:
static void Main(string[] args)
{
    byte[] file = new byte[] { 0, 1, 2, 3, 4, 5, 6, 7, 8 };

    // AsSpan creates a view over the existing array; no bytes are copied
    ReadOnlySpan<byte> header = file.AsSpan(0, 4);
    ReadOnlySpan<byte> content = file.AsSpan(4);

    bool isValid = IsValidHeader(header);
}

private static bool IsValidHeader(ReadOnlySpan<byte> header)
{
    return header[0] == 0 && header[1] == 1;
}
Using ReadOnlySpan<T> (part of the Span<T> infrastructure) here makes it possible to work with the data in the array without duplicating it. The memory pressure stays at 10 MB; the array is never copied. And since arrays, strings and streams can all expose their data as spans, you can build one common method for all of these sources.
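As a rough sketch of what such a common method can look like (the class and variable names below are mine, not part of the example above), the same ReadOnlySpan<byte> check can be fed from a plain array, a stackalloc buffer, or a buffer filled from a Stream:

using System;
using System.IO;

class SpanSources
{
    static bool IsValidHeader(ReadOnlySpan<byte> header) =>
        header[0] == 0 && header[1] == 1;

    static void Main()
    {
        byte[] file = { 0, 1, 2, 3, 4, 5, 6, 7, 8 };

        // 1. A slice of a managed array, no copy
        Console.WriteLine(IsValidHeader(file.AsSpan(0, 4)));

        // 2. A stack-allocated buffer, no heap allocation at all (C# 7.2 allows stackalloc into Span<T>)
        Span<byte> stackBuffer = stackalloc byte[4];
        stackBuffer[1] = 1;   // [0] stays 0, so the header check passes
        Console.WriteLine(IsValidHeader(stackBuffer));

        // 3. A buffer filled from a stream; Stream.Read(Span<byte>) is available from .NET Core 2.1
        using (var stream = new MemoryStream(file))
        {
            Span<byte> streamBuffer = stackalloc byte[4];
            int read = stream.Read(streamBuffer);
            Console.WriteLine(read == 4 && IsValidHeader(streamBuffer));
        }
    }
}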
Of course, this could have been implemented with IEnumerable<T> too, but the performance is much better: a span points straight at the underlying memory, so there is no repeated skipping and re-enumeration every time you use the content variable.
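To make that difference concrete, here is a sketch (the ReadBytes iterator and the read counter are purely illustrative): every use of the IEnumerable<T> version pulls all items from the source again, including the skipped ones, while slicing a span is just an offset and a length over memory that already exists:

using System;
using System.Collections.Generic;
using System.Linq;

class SkipVersusSlice
{
    static int reads;   // counts how often the source hands out a byte

    static IEnumerable<byte> ReadBytes()
    {
        for (byte i = 0; i < 9; i++)
        {
            reads++;
            yield return i;
        }
    }

    static void Main()
    {
        // Deferred execution: each use of 'content' re-reads the source, skipped items included
        IEnumerable<byte> content = ReadBytes().Skip(4);
        Console.WriteLine(content.Count());   // 5
        Console.WriteLine(content.Count());   // 5, but the source was enumerated again
        Console.WriteLine(reads);             // 18 (9 reads per enumeration)

        // A span slice reuses the existing memory; nothing is re-read or copied
        byte[] file = { 0, 1, 2, 3, 4, 5, 6, 7, 8 };
        ReadOnlySpan<byte> slice = file.AsSpan(4);
        Console.WriteLine(slice.Length);      // 5
    }
}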