
The following code shows two approaches to generating pairs of integers whose sum is less than 100, arranged in descending order by their distance from (0, 0).

    //approach 1
    private static IEnumerable<Tuple<int, int>> ProduceIndices3()
    {
        var storage = new List<Tuple<int, int>>();
        for (int x = 0; x < 100; x++)
        {
            for (int y = 0; y < 100; y++)
            {
                if (x + y < 100)
                    storage.Add(Tuple.Create(x, y));
            }
        }
        storage.Sort((p1,p2) =>
           (p2.Item1 * p2.Item1 + 
           p2.Item2 * p2.Item2).CompareTo(
           p1.Item1 * p1.Item1 +
           p1.Item2 * p1.Item2));
        return storage;
    }

    //approach 2
    private static IEnumerable<Tuple<int, int>> QueryIndices3()
    {
        return from x in Enumerable.Range(0, 100)
               from y in Enumerable.Range(0, 100)
               where x + y < 100
               orderby (x * x + y * y) descending
               select Tuple.Create(x, y);
    }
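As an aside, the query expression in approach 2 is something the compiler translates into chained extension-method calls. A rough sketch of the equivalent method syntax (the method name QueryIndices3MethodSyntax is mine, added for illustration):

    //roughly what the compiler generates for the query expression
    private static IEnumerable<Tuple<int, int>> QueryIndices3MethodSyntax()
    {
        return Enumerable.Range(0, 100)
            // 'from x ... from y ...' becomes SelectMany over both ranges
            .SelectMany(x => Enumerable.Range(0, 100), (x, y) => new { x, y })
            // 'where x + y < 100'
            .Where(p => p.x + p.y < 100)
            // 'orderby (x * x + y * y) descending'
            .OrderByDescending(p => p.x * p.x + p.y * p.y)
            // 'select Tuple.Create(x, y)'
            .Select(p => Tuple.Create(p.x, p.y));
    }

Seeing the translation makes it clearer that both approaches ultimately loop over the same pairs; the query version just defers the work until it is enumerated.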

This code is taken from the book Effective C# by Bill Wagner, Item 8. Throughout the item, the author focuses on the syntax, compactness, and readability of the code, but pays very little attention to performance and barely discusses it at all.

So I basically want to know: which approach is faster? And which usually performs better (in general): query syntax or manual loops?

Please discuss them in detail, providing references if any. :-)

Nawaz
    The query has to devolve into a loop at some point, so there must be some overhead, even if the difference is negligible. – Ed S. Jan 20 '11 at 19:31
  • Just out of curiosity, could you explain this ordering: orderby (x * x + y * y) descending? I don't understand how it works – Notter Jan 20 '11 at 19:37
  • 1
    the second one will be a lot faster, since it doesn't do anything but building a query, after doing a ToList() you will ba able to compare ;) – Guillaume86 Jan 20 '11 at 19:41
  • Notter: it's the distance from 0 – Guillaume86 Jan 20 '11 at 19:42

3 Answers


Profiling is truth, but my gut feeling would be that the loops are probably faster. The important thing is that 99 times out of 100 the performance difference just doesn't matter in the grand scheme of things. Use the more readable version and your future self will thank you when you need to maintain it later.

Eric Petroelje
  • If I may add, in the second method the compiler can better understand the purpose of the code, so it can optimize it more effectively. Maybe right now, even with optimizations, the first method runs faster; however, in the not-so-distant future we might get better performance from the second one. – Bruno Brant Jan 20 '11 at 19:38
  • Profiling may be a little overkill for a small learning project. It's really easy to increase the number from 100 to 1,000,000 and just measure the difference in time with a stopwatch. – Phil Jan 20 '11 at 19:38
  • @Phil - I would consider that profiling, even though you aren't using a fancy tool to do it. – Eric Petroelje Jan 20 '11 at 20:11

Running each function 1000 times:

    for loop: 2623 ms
    query:    2821 ms

Looks logical, since the second one is just syntactic sugar for the first. But I would use the second one for its readability.
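For anyone who wants to reproduce this, a minimal sketch of the kind of timing harness that could produce such numbers, assuming the two methods from the question are in scope. Note the ToList() call: without it, the lazy query never actually executes, as pointed out in the comments above.

    var sw = System.Diagnostics.Stopwatch.StartNew();
    for (int i = 0; i < 1000; i++)
        ProduceIndices3();                 //eager: builds and sorts a List each call
    sw.Stop();
    Console.WriteLine("for loop: {0} ms", sw.ElapsedMilliseconds);

    sw.Restart();
    for (int i = 0; i < 1000; i++)
        QueryIndices3().ToList();          //ToList() forces the deferred query to run
    sw.Stop();
    Console.WriteLine("query: {0} ms", sw.ElapsedMilliseconds);

The exact numbers will vary by machine and runtime; a warm-up iteration before timing would also make the comparison fairer by excluding JIT compilation.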

Guillaume86

Though this doesn't strictly answer your question, performance-wise I would suggest merging that x+y logic into the iteration, thus:

    for (int x = 0; x < 100; x++)
        for (int y = 0; y < 100 - x; y++)
            storage.Add(Tuple.Create(x, y));
Reinderien