Sometimes it is necessary to use awkward/implausible constructs in one's code in order to lend a helping hand to underpowered JIT compilers, since the powers that be in their wisdom decided that programs should be jitted before each use instead of being run through an optimising compiler once at build time or at installation time.
Are there any existing conventions for tagging these awkward/implausible constructs, analogous to the /*FALLTHRU*/ comment that hearkens back to lint?
These constructs have to be tagged for reasons similar to why fall-through cases in switch statements (in C/C++) need to be tagged with /*FALLTHRU*/: less for the benefit of automated tools than to protect them from well-meaning but short-sighted maintenance programmers (who could be ourselves in a few weeks/months/years' time). Another benefit of tagging would be tuning one's built-in performance sense over time upon seeing enough properly tagged code. Adding warnings in prose would create more noise, and it wouldn't be quite as effective...
A classic example that requires tagging would be:
class Foo
{
    int[] m_a;

    public void f ()
    {
        var a = m_a;
        int n = a.Length;

        // ...

        for (int i = 0; i < a.Length; ++i)
            a[i] = ...
    }
}
Some of the MS jitters are able to create and manage the local shadow variable for the member array on their own, but none does so reliably all of the time. Also, at least some of them eliminate the bounds check in the loop body only if the loop test compares against the .Length member of the array being accessed in the loop body, not if the test is against a cached value (n in the example). See also the topic Array bounds check efficiency in .net 4 and above.
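For illustration, here is a variant of f that, going by the behaviour just described, can defeat the bounds-check elimination (the method name g and the zero-fill body are placeholders, not part of the original example):

public void g ()
{
    var a = m_a;
    int n = a.Length;

    for (int i = 0; i < n; ++i)    // loop test against the cached length n...
        a[i] = 0;                  // ...so the bounds check may remain in the loop body
}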
Hence the shadow variable a needs to be tagged as 'necessity proven by benchmark, not premature optimisation based on prejudice', and the use of a.Length instead of n needs to be tagged as 'don't touch this lest the wrath of $me be upon you'.
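Concretely, the kind of tagging I am asking about might look like this (the tag wording and the PERF prefix are only placeholders for whatever the convention would be):

var a = m_a;                          // PERF: shadow copy of m_a, necessity proven by benchmark
int n = a.Length;

for (int i = 0; i < a.Length; ++i)    // PERF: must test against a.Length, not n, or the bounds check comes back
    a[i] = ...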
The reason I'm sensitive to this is that I can often squeeze enough performance out of tuned safe/managed C# so that I don't have to allow unsafe code or fart around with DLLs written in C++. However, these tunings are somewhat fragile. Also, creeping losses of 5% performance here and 10% there can be difficult to detect, but in my image processing code they can add up to many minutes or even hours over a workday. The difficulty of detecting such regressions is made worse by the fact that the quality of the input images has a tendency to drift, and it can vary considerably from one client to the next and also from one day to the next for the same client.
Hence the explicit tagging as an insurance measure against unnecessary performance loss (there are other measures but they aren't germane to the tagging question).
P.S.: there are other areas where tagging is required, such as the use of signed integers instead of unsigned ones, because newer (!) jitters cannot turn unsigned division by constants into multiplication with the inverse the way they do for signed integers. Some tight loops can become several times as fast if signed integers are used, even when logic would demand unsigned integers and even though signed semantics in general tend to require a certain overhead compared to unsigned math.
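A minimal sketch of such a tagged loop, assuming a hypothetical method name and divisor:

// PERF: int instead of uint on purpose - the jitter strength-reduces signed
// division by a constant to multiplication with the inverse, but (as noted
// above) not unsigned division; proven by benchmark, don't 'correct' to uint.
static int SumOfHundredths(int[] values)
{
    int sum = 0;

    for (int i = 0; i < values.Length; ++i)
        sum += values[i] / 100;

    return sum;
}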