
I could not come up with a title that accurately reflects this problem. I want to use a list, func(*args), and Pool.map together without errors. Please see below.

▼Code

from multiprocessing import Pool

import psutil
from pandas import DataFrame

def df_parallelize_run(func, arguments):
    p = Pool(psutil.cpu_count())
    df = p.map(func, arguments)
    p.close()
    p.join()
    return df

def make_lag(df: DataFrame, LAG_DAY: list):
    for l in LAG_DAY:
        df[f'lag{l}d'] = df.groupby(['id'])['target'].transform(lambda x: x.shift(l))

    return df

def wrap_make_lag(args):
    return make_lag(*args)

Given the above three functions, I want to do the following:

# df: DataFrame
arguments = (df, [1, 3, 7, 13, 16])
df = df_parallelize_run(wrap_make_lag, arguments)

▼Error

in df_parallelize_run(func, arguments)
----> 7     df = pool.map(func, arguments)

in ..../python3.7/multiprocessing/pool.py in map(self, func, iterable, chunksize)
--> 268         return self._map_async(func, iterable, mapstar, chunksize).get()

in ..../python3.7/multiprocessing/pool.py in get(self, timeout)
--> 657             raise self._value

TypeError: make_lag() takes 2 positional arguments but 5 were given

I know the cause of this mismatch: Pool.map iterates over the tuple, and unpacking the list [1, 3, 7, 13, 16] gives 5 positional arguments. How can I do this properly? If possible, I want to keep passing the list as a single positional argument. If that is practically impossible (with a list and Pool.map), what is a more appropriate, easy, and flexible way?
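As far as I understand, Pool.map behaves roughly like the plain loop below (only a sketch of my understanding, not the real implementation), which is where the 5 arguments come from:

# rough equivalent of what Pool.map does with my tuple
arguments = (df, [1, 3, 7, 13, 16])
for item in arguments:
    wrap_make_lag(item)  # 2nd item: make_lag(1, 3, 7, 13, 16) -> 5 positional args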

Tomoand

2 Answers


Use Pool.starmap. You generate a list of tuples for the arguments to your function, and starmap unpacks each tuple into positional arguments. Here, it looks like df is the same each time and arg is each element of arguments.

import multiprocessing

arglist = [(df, arg) for arg in arguments]
with multiprocessing.Pool(multiprocessing.cpu_count()) as p:
    results = p.starmap(make_lag, arglist)
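
As a rough, self-contained sketch (the toy data and the single-lag variant of make_lag are made up just for illustration), starmap calls the function once per tuple and unpacks each tuple into positional arguments:

import multiprocessing

import pandas as pd

def make_lag(df, lag):
    # single-lag variant, just for this sketch
    df[f'lag{lag}d'] = df.groupby('id')['target'].transform(lambda x: x.shift(lag))
    return df

if __name__ == '__main__':
    df = pd.DataFrame({'id': [1, 1, 1, 2, 2, 2],
                       'target': [10, 11, 12, 20, 21, 22]})
    arglist = [(df, arg) for arg in [1, 2]]        # [(df, 1), (df, 2)]
    with multiprocessing.Pool(multiprocessing.cpu_count()) as p:
        results = p.starmap(make_lag, arglist)     # make_lag(df, 1), make_lag(df, 2)
    print(results[0])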

Eric Truett

Solved. I rewrote it in the following way.

▼Functions

from multiprocessing import Pool

import pandas as pd
import psutil

def df_parallelize_run(func, arglist):
    with Pool(psutil.cpu_count()) as p:
        # concatenate the many single-column frames returned by the workers
        results = pd.concat(p.starmap(func, arglist), axis=1)
    return results

def make_lag(df, lag):
    if not isinstance(lag, list):
        lag = [lag]

    # the loop is not strictly needed with multiprocessing (each call gets one lag),
    # but it keeps the function usable with a plain list as well
    for l in lag:
        col_name = f'lag{l}d'
        df[col_name] = df.groupby(['id'])['target'].transform(lambda x: x.shift(l))

    # note: if a list of lags was given, only the last column is returned
    return df[[col_name]]

▼Other function

def make_lag_roll(df, lag, roll):
    col_name = f'lag{lag}_roll_mean_{roll}'
    df[col_name] = df.groupby(['id'])['target'].transform(lambda x: x.shift(lag).rolling(roll).mean())

    return df[[col_name]]

▼How to use

arglist = [(df[['id', 'target']], arg) for arg in range(1, 36)]
lag_df = df_parallelize_run(make_lag, arglist)

arglist_roll = [(df[['id', 'target']], lag, roll)
                for lag in range(1, 36)
                for roll in [7, 14, 28]]
lag_roll_df = df_parallelize_run(make_lag_roll, arglist_roll)
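
If needed, the single-column frames can be joined back onto the original frame (a sketch that assumes the returned frames keep the original index):

# merge the new lag features back into df, relying on index alignment
df = pd.concat([df, lag_df, lag_roll_df], axis=1)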
Tomoand