If you rewrite the generator expression as a map call (or, for 2.x, imap):
max(map(len, words))
… it's actually a bit faster than the key version, not slower.
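For 2.x, where map builds a whole list, the lazy equivalent is itertools.imap; a minimal sketch:

from itertools import imap  # 2.x only; in 3.x, map is already lazy
max(imap(len, words))       # same result as max(map(len, words)), without building a list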
python.org 64-bit 3.3.0:
In [186]: words = ['now', 'is', 'the', 'winter', 'of', 'our', 'partyhat'] * 100
In [188]: %timeit max(len(w) for w in words)
10000 loops, best of 3: 90.1 us per loop
In [189]: %timeit len(max(words, key=len))
10000 loops, best of 3: 57.3 us per loop
In [190]: %timeit max(map(len, words))
10000 loops, best of 3: 53.4 us per loop
Apple 64-bit 2.7.2:
In [298]: words = ['now', 'is', 'the', 'winter', 'of', 'our', 'partyhat'] * 100
In [299]: %timeit max(len(w) for w in words)
10000 loops, best of 3: 99 us per loop
In [300]: %timeit len(max(words, key=len))
10000 loops, best of 3: 64.1 us per loop
In [301]: %timeit max(map(len, words))
10000 loops, best of 3: 67 us per loop
In [303]: %timeit max(itertools.imap(len, words))
10000 loops, best of 3: 63.4 us per loop
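If you want to reproduce the comparison outside IPython, here's a rough sketch using the stdlib timeit module (numbers will of course vary by machine and Python version):

import timeit

words = ['now', 'is', 'the', 'winter', 'of', 'our', 'partyhat'] * 100
setup = "from __main__ import words"

for stmt in ("max(len(w) for w in words)",
             "len(max(words, key=len))",
             "max(map(len, words))"):
    # best of 3 runs of 10000 loops, roughly what %timeit reports
    best = min(timeit.repeat(stmt, setup=setup, repeat=3, number=10000))
    print("%-30s %6.1f us per loop" % (stmt, best / 10000 * 1e6))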
I think it's more pythonic than the key version, for the same reason the genexp is.
It's arguable whether it's as pythonic as the genexp version. Some people love map/filter/reduce/etc.; some hate them; my personal feeling is that when you're trying to map a function that already exists and has a nice name (that is, something you don't have to lambda or partial up), map is nicer, but YMMV (especially if your name is Guido).
One last point:
the redundancy of len being called twice seems not to matter - does more happen in C code in this form?
Think about it like this: You're already calling len N times. Calling it N+1 times instead is hardly likely to make a difference, compared to anything you have to do N times, unless you have a tiny number of huge strings.
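If you want to see the N vs. N+1 concretely, here's a quick sketch that wraps len in a call counter (counting_len and calls are just illustrative names, not anything from the original):

words = ['now', 'is', 'the', 'winter', 'of', 'our', 'partyhat'] * 100

calls = 0
def counting_len(s):
    global calls
    calls += 1
    return len(s)

# len(max(words, key=len)): len runs N times inside max, then once more on the result
calls = 0
counting_len(max(words, key=counting_len))
print(calls)   # 701, i.e. N + 1

# max(map(len, words)): len runs exactly N times
calls = 0
max(map(counting_len, words))
print(calls)   # 700, i.e. N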