
I'm following a data structures and algorithms course taught in Python at the moment, and the instructor always opts to unpack arrays (well, lists) using the range function (Method 2 below) instead of the kind of unpacking I'm used to seeing in Python. I'm surprised, as in my opinion Method 1 is more readable.

Is there something I'm missing that makes Method 2 preferable? I thought maybe speed, but another post on here mentions that calling range is actually slower because a new number has to be 'generated' on each loop. The other thing I was thinking is that the instructor is just trying to keep the code language-agnostic, so it can be replicated in other languages more easily.

Code sample:

Given a hash table index location:

index_location = [['a', 1], ['b', 2], ['c', 3], ['d', 4]]

Method 1:

for key, value in index_location:
    print(key)

Method 2:

for i in range(len(index_location)):
    print(index_location[i][0])
  • This question is off-topic as primarily opinion-based. You should rather ask your instructor for the reasons behind their choice. I'd side with you here; the first approach is more "pythonic". – mozway Jul 06 '23 at 17:24
  • If you actually *need* the index as well as the value (for example, if you will be assigning a value back to the list), the usual idiom is `for idx, val in enumerate(X):`, rather than using `range()`. – jasonharper Jul 06 '23 at 17:35
  • Method 2 is something I would expect someone with a *long* history of writing C code to write out of familiarity, despite Method 1 being more idiomatic for Python. (Even if the syntax is different, the *idea* that you have to iterate over indices rather than iterating over a container directly can be quite ingrained.) – chepner Jul 06 '23 at 18:16
  • The unpacking part is mostly a side issue here. The real issue here is whether to iterate directly over the list elements or `range(len(list))`. It's generally considered more pythonic to iterate over the elements. – Barmar Jul 06 '23 at 18:17
  • Note that method 1 generalizes to any kind of generator and most collections, method 2 can only be used when the sequence has a length and can be indexed. You can't use method 2 for most generators, sets, dictionaries, etc. – Barmar Jul 06 '23 at 18:19
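For illustration, a minimal sketch of what the comments above describe: the enumerate idiom for when you actually need the index, and direct iteration over a generator, where range(len(...)) can't be used at all.

index_location = [['a', 1], ['b', 2], ['c', 3], ['d', 4]]

# enumerate() yields the index and the element together
for idx, (key, value) in enumerate(index_location):
    index_location[idx] = [key, value * 10]  # assigning back needs the index

# Direct iteration also works on objects with no len() and no indexing,
# e.g. a generator
pairs = (pair for pair in index_location)
for key, value in pairs:
    print(key, value)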

2 Answers


I don't agree with the comment that this is just an opinion question, since it can be tested. I do agree with the comments that boil down to "it depends on what you're trying to do." If your task only needs to do something len(your_list) times, then Method 2 is faster by almost 2x (you can test this by replacing the tmp_lst.append calls below with pass in each test loop). However, if you need to access the elements of your list (which you do in your example), then Method 1 came out just a touch faster. It also has the benefit of being "more Pythonic", as noted in the other comments.

import random
import time

rand_list = [random.sample(range(10), 2)]*100000

method1_times = []
method2_times = []

for i in range(50):
    ## Method 1
    start = time.perf_counter()
    tmp_lst1 = []
    for key, value in rand_list:
        tmp_lst1.append([key, value])
    end = time.perf_counter()
    method1_times.append(end-start)
    
    ## Method 2
    start = time.perf_counter()
    tmp_lst2 = []
    for j in range(len(rand_list)):
        tmp_lst2.append([rand_list[j][0], rand_list[j][1]])
    end = time.perf_counter()
    method2_times.append(end-start)
    
print(f"Method 1 Average: {round((sum(method1_times)/len(method1_times)), 5)}")
print(f"Method 2 Average: {round((sum(method2_times)/len(method2_times)), 5)}")

Method 1 Average: 0.03439
Method 2 Average: 0.03759
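
For reference, a rough sketch of the pass-only variant mentioned above (same setup; exact timings will vary by machine). With the append replaced by pass, Method 1 still unpacks each pair, while Method 2 only iterates the range, which is the case where Method 2 came out ahead.

import random
import time

rand_list = [random.sample(range(10), 2)] * 100000

# Method 1 with pass: the key/value unpacking still happens every iteration
start = time.perf_counter()
for key, value in rand_list:
    pass
print(f"Method 1 (pass only): {round(time.perf_counter() - start, 5)}")

# Method 2 with pass: plain range iteration, no element access at all
start = time.perf_counter()
for j in range(len(rand_list)):
    pass
print(f"Method 2 (pass only): {round(time.perf_counter() - start, 5)}")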
astroChance
  • opinion: if you need to optimize a for loop, you should not use a for loop at all, rather than choosing the fastest for-loop version (vectorize, use numba, etc.). – dankal444 Jul 07 '23 at 09:36
  • Agreed, I would get rid of all of my `for` loops if I could! – astroChance Jul 07 '23 at 13:06
  • I would only get rid of those affecting performance :) For loops are not so bad when it comes to code readability (as long as they're not deeply nested). – dankal444 Jul 07 '23 at 13:32
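
Picking up on dankal444's comment, a rough sketch of what avoiding the explicit loop could look like (the list comprehension is still a Python-level loop, just a faster one; the NumPy version assumes numpy is installed):

import random

import numpy as np

rand_list = [random.sample(range(10), 2)] * 100000

# List comprehension: still iterates in Python, but avoids repeated append() calls
keys = [key for key, value in rand_list]

# NumPy: vectorized column slice, no Python-level loop
# (works here because every element is a same-length pair of ints)
keys_np = np.array(rand_list)[:, 0]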

We can use dis to check the difference in the emitted bytecode:

import dis


def deconstruction_approach(index_location):
    l = []
    for key, value in index_location:
        l.append(key)


dis.dis(deconstruction_approach)
  4           0 RESUME                   0

  5           2 BUILD_LIST               0
              4 STORE_FAST               1 (l)

  6           6 LOAD_FAST                0 (index_location)
              8 GET_ITER
        >>   10 FOR_ITER                26 (to 64)
             12 UNPACK_SEQUENCE          2
             16 STORE_FAST               2 (key)
             18 STORE_FAST               3 (value)

  7          20 LOAD_FAST                1 (l)
             22 LOAD_METHOD              0 (append)
             44 LOAD_FAST                2 (key)
             46 PRECALL                  1
             50 CALL                     1
             60 POP_TOP
             62 JUMP_BACKWARD           27 (to 10)

  6     >>   64 LOAD_CONST               0 (None)
             66 RETURN_VALUE

def range_approach(index_location):
    l = []
    for i in range(len(index_location)):
        l.append(index_location[i][0])


dis.dis(range_approach)
  1           0 RESUME                   0

  2           2 BUILD_LIST               0
              4 STORE_FAST               1 (l)

  3           6 LOAD_GLOBAL              1 (NULL + range)
             18 LOAD_GLOBAL              3 (NULL + len)
             30 LOAD_FAST                0 (index_location)
             32 PRECALL                  1
             36 CALL                     1
             46 PRECALL                  1
             50 CALL                     1
             60 GET_ITER
        >>   62 FOR_ITER                35 (to 134)
             64 STORE_FAST               2 (i)

  4          66 LOAD_FAST                1 (l)
             68 LOAD_METHOD              2 (append)
             90 LOAD_FAST                0 (index_location)
             92 LOAD_FAST                2 (i)
             94 BINARY_SUBSCR
            104 LOAD_CONST               1 (0)
            106 BINARY_SUBSCR
            116 PRECALL                  1
            120 CALL                     1
            130 POP_TOP
            132 JUMP_BACKWARD           36 (to 62)

  3     >>  134 LOAD_CONST               0 (None)
            136 RETURN_VALUE

Nothing crazy here: a few fewer CALLs for the deconstruction_approach, the name index_location doesn't have to be loaded on every iteration, and we skip the BINARY_SUBSCR operations, trading them for a single UNPACK_SEQUENCE.

It seems reasonable to assume the deconstruction_approach should be more performant.

Profiling:

import random

from performance_measurement import run_performance_comparison


def deconstruction_approach(index_location):
    l = []
    for key, value in index_location:
        l.append(key)


def range_approach(index_location):
    l = []
    for i in range(len(index_location)):
        l.append(index_location[i][0])


def setup(N):
    data = [[""] * 2 for _ in range(N)]
    for i in range(N):
        data[i][0] = chr(ord("a") + i)
        data[i][1] = random.randint(1, 100)

    return [data]


run_performance_comparison(
    approaches=[range_approach, deconstruction_approach],
    data_size=[10000, 20000, 30000, 100000, 200_000, 300_000, 500_000],
    setup=setup,
)

Result as expected:

[Plot: execution time (seconds) vs. N for range_approach and deconstruction_approach]

Profiling Code:

import timeit
from contextlib import contextmanager
from functools import partial
from typing import Callable, Dict, List

import matplotlib.pyplot as plt


@contextmanager
def data_provider(data_size, setup=lambda N: N, teardown=lambda *data: None):
    data = setup(data_size)
    yield data
    teardown(*data)


def run_performance_comparison(approaches: List[Callable],
                               data_size: List[int],
                               *,
                               setup=lambda N: [N],
                               teardown=lambda *N: None,
                               number_of_repetitions=5,
                               title='Performance Comparison',
                               data_name='N',
                               yscale='log',
                               xscale='log'):
    
    approach_times: Dict[Callable, List[float]] = {approach: [] for approach in approaches}
    for N in data_size:
        with data_provider(N, setup, teardown) as data:
            print(f'Running performance comparison for {data_name}={N}')
            for approach in approaches:
                function = partial(approach, *data)
                approach_time = min(timeit.Timer(function).repeat(repeat=number_of_repetitions, number=1))
                approach_times[approach].append(approach_time)

    for approach in approaches:
        plt.plot(data_size, approach_times[approach], label=approach.__name__)
    plt.yscale(yscale)
    plt.xscale(xscale)

    plt.xlabel(data_name)
    plt.ylabel('Execution Time (seconds)')
    plt.title(title)
    plt.legend()
    plt.show()

Having said that, the clarity of intent and the explicitness of the deconstruction_approach really tip the scales for me, not the performance.

Sebastian Wozny