I understand the general idea of deferred execution with respect to a collection, but in the context of .NET, what does that have in common with reconstructing the same elements of a collection over and over again instead of reading them from a cache? Deferred execution is supposed to be about delaying execution until it's actually necessary, sometimes allowing the program to skip populating later elements of a collection if they're never needed, along with other optimizations like that. However, take the following example:
using System;
using System.Collections.Generic;
using System.Linq;

public class Program
{
    public static void Main()
    {
        IEnumerable<NestedType> tests = GetTestCases();

        // First pass: print each element's value, then mutate it.
        foreach (NestedType test in tests)
        {
            Console.WriteLine(test.Value);
            test.Value = "x";
            Console.WriteLine(test.Value);
        }

        // Second pass: check whether the mutations stuck.
        foreach (NestedType test in tests)
        {
            Console.WriteLine(test.Value);
        }
    }

    private static IEnumerable<NestedType> GetTestCases()
    {
        return new[] {"a", "b", "c"}.Select(x => new NestedType {Value = x});
    }

    private class NestedType
    {
        internal string Value { get; set; }
    }
}
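The second loop prints "a", "b", "c" again rather than "x", "x", "x", because re-enumerating the query re-runs the Select lambda and produces fresh objects. The full output is:

a
x
b
x
c
x
a
b
c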
The IEnumerable<NestedType> returned by GetTestCases uses deferred execution, but as it turns out, this doesn't just mean it waits until an element of the sequence is accessed before running the lambda to compute it; it also means it re-runs the lambda to recreate the very same elements it has already produced. (Not literally the same objects, but new objects occupying the same "slots" of the sequence.)
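For what it's worth, I know I can force the query to be evaluated exactly once by materializing it, for example with ToList(); with that one change, both loops iterate over the same cached objects and the second loop prints "x" three times:

// Materializing the query executes the Select lambda once per element,
// so mutations in the first loop are visible in the second loop.
List<NestedType> tests = GetTestCases().ToList();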
These two concepts seem very separate. Why were they put together like this? What does deferring execution of a function until it's needed have in common with refusing to read from a cache? Even if later elements haven't been accessed yet, why aren't the elements that have already been accessed stored in an internal array or something and re-read?
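To make that concrete, here is a rough sketch of the kind of caching wrapper I would have expected; this is purely hypothetical and not anything LINQ actually does, just an illustration where each element is computed at most once and later enumerations replay from an internal list:

// Hypothetical memoizing wrapper (illustration only, not part of LINQ):
// computes each element of the source at most once, replaying
// already-computed elements from an internal cache on later enumerations.
private static IEnumerable<T> Memoize<T>(IEnumerable<T> source)
{
    var cache = new List<T>();
    IEnumerator<T> enumerator = source.GetEnumerator();

    return Iterate();

    IEnumerable<T> Iterate()
    {
        int index = 0;
        while (true)
        {
            if (index < cache.Count)
            {
                // Re-read an element that was already computed.
                yield return cache[index++];
            }
            else if (enumerator.MoveNext())
            {
                // Compute the next element once and remember it.
                cache.Add(enumerator.Current);
                yield return cache[index++];
            }
            else
            {
                yield break;
            }
        }
    }
}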
What advantages/disadvantages does this have, and how does it affect coding in .NET?